00:00:00.001 Started by upstream project "autotest-nightly" build number 3882
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3262
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.062 The recommended git tool is: git
00:00:00.062 using credential 00000000-0000-0000-0000-000000000002
00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.095 Fetching changes from the remote Git repository
00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.147 Using shallow fetch with depth 1
00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.147 > git --version # timeout=10
00:00:00.200 > git --version # 'git version 2.39.2'
00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.788 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.800 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.812 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD)
00:00:05.813 > git config core.sparsecheckout # timeout=10
00:00:05.824 > git read-tree -mu HEAD # timeout=10
00:00:05.841 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5
00:00:05.861 Commit message: "jjb/create-perf-report: make job run concurrent"
00:00:05.862 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10
00:00:05.966 [Pipeline] Start of Pipeline
00:00:05.981 [Pipeline] library
00:00:05.982 Loading library shm_lib@master
00:00:05.983 Library shm_lib@master is cached. Copying from home.
00:00:05.997 [Pipeline] node
00:00:06.005 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.007 [Pipeline] {
00:00:06.015 [Pipeline] catchError
00:00:06.017 [Pipeline] {
00:00:06.030 [Pipeline] wrap
00:00:06.041 [Pipeline] {
00:00:06.049 [Pipeline] stage
00:00:06.051 [Pipeline] { (Prologue)
00:00:06.222 [Pipeline] sh
00:00:06.502 + logger -p user.info -t JENKINS-CI
00:00:06.518 [Pipeline] echo
00:00:06.520 Node: WFP8
00:00:06.526 [Pipeline] sh
00:00:06.822 [Pipeline] setCustomBuildProperty
00:00:06.835 [Pipeline] echo
00:00:06.836 Cleanup processes
00:00:06.842 [Pipeline] sh
00:00:07.124 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.124 616264 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.137 [Pipeline] sh
00:00:07.424 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.424 ++ grep -v 'sudo pgrep'
00:00:07.424 ++ awk '{print $1}'
00:00:07.424 + sudo kill -9
00:00:07.424 + true
00:00:07.437 [Pipeline] cleanWs
00:00:07.445 [WS-CLEANUP] Deleting project workspace...
00:00:07.445 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.450 [WS-CLEANUP] done
00:00:07.453 [Pipeline] setCustomBuildProperty
00:00:07.462 [Pipeline] sh
00:00:07.740 + sudo git config --global --replace-all safe.directory '*'
00:00:07.813 [Pipeline] httpRequest
00:00:07.832 [Pipeline] echo
00:00:07.834 Sorcerer 10.211.164.101 is alive
00:00:07.843 [Pipeline] httpRequest
00:00:07.847 HttpMethod: GET
00:00:07.848 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:07.849 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:07.851 Response Code: HTTP/1.1 200 OK
00:00:07.852 Success: Status code 200 is in the accepted range: 200,404
00:00:07.853 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:09.167 [Pipeline] sh
00:00:09.447 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:09.463 [Pipeline] httpRequest
00:00:09.493 [Pipeline] echo
00:00:09.494 Sorcerer 10.211.164.101 is alive
00:00:09.502 [Pipeline] httpRequest
00:00:09.507 HttpMethod: GET
00:00:09.508 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:09.509 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:09.527 Response Code: HTTP/1.1 200 OK
00:00:09.528 Success: Status code 200 is in the accepted range: 200,404
00:00:09.528 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:19.962 [Pipeline] sh
00:01:20.245 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:22.796 [Pipeline] sh
00:01:23.080 + git -C spdk log --oneline -n5
00:01:23.080 719d03c6a sock/uring: only register net impl if supported
00:01:23.080 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:23.080 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:23.080 6c7c1f57e accel: add sequence outstanding stat
00:01:23.080 3bc8e6a26 accel: add utility to put task
00:01:23.093 [Pipeline] }
00:01:23.111 [Pipeline] // stage
00:01:23.121 [Pipeline] stage
00:01:23.123 [Pipeline] { (Prepare)
00:01:23.144 [Pipeline] writeFile
00:01:23.164 [Pipeline] sh
00:01:23.448 + logger -p user.info -t JENKINS-CI
00:01:23.465 [Pipeline] sh
00:01:23.752 + logger -p user.info -t JENKINS-CI
00:01:23.764 [Pipeline] sh
00:01:24.050 + cat autorun-spdk.conf
00:01:24.050 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.050 SPDK_TEST_NVMF=1
00:01:24.050 SPDK_TEST_NVME_CLI=1
00:01:24.050 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.050 SPDK_TEST_NVMF_NICS=e810
00:01:24.050 SPDK_RUN_ASAN=1
00:01:24.050 SPDK_RUN_UBSAN=1
00:01:24.050 NET_TYPE=phy
00:01:24.057 RUN_NIGHTLY=1
00:01:24.061 [Pipeline] readFile
00:01:24.082 [Pipeline] withEnv
00:01:24.084 [Pipeline] {
00:01:24.101 [Pipeline] sh
00:01:24.390 + set -ex
00:01:24.390 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:24.390 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:24.390 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.390 ++ SPDK_TEST_NVMF=1
00:01:24.390 ++ SPDK_TEST_NVME_CLI=1
00:01:24.390 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.390 ++ SPDK_TEST_NVMF_NICS=e810
00:01:24.390 ++ SPDK_RUN_ASAN=1
00:01:24.390 ++ SPDK_RUN_UBSAN=1
00:01:24.390 ++ NET_TYPE=phy
00:01:24.390 ++ RUN_NIGHTLY=1
00:01:24.390 + case $SPDK_TEST_NVMF_NICS in
00:01:24.390 + DRIVERS=ice
00:01:24.390 + [[ tcp == \r\d\m\a ]]
00:01:24.390 + [[ -n ice ]]
00:01:24.390 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:24.390 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:24.390 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:24.390 rmmod: ERROR: Module irdma is not currently loaded
00:01:24.390 rmmod: ERROR: Module i40iw is not currently loaded
00:01:24.390 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:24.390 + true
00:01:24.390 + for D in $DRIVERS
00:01:24.390 + sudo modprobe ice
00:01:24.390 + exit 0
00:01:24.399 [Pipeline] }
00:01:24.420 [Pipeline] // withEnv
00:01:24.425 [Pipeline] }
00:01:24.443 [Pipeline] // stage
00:01:24.453 [Pipeline] catchError
00:01:24.455 [Pipeline] {
00:01:24.472 [Pipeline] timeout
00:01:24.473 Timeout set to expire in 50 min
00:01:24.474 [Pipeline] {
00:01:24.490 [Pipeline] stage
00:01:24.492 [Pipeline] { (Tests)
00:01:24.508 [Pipeline] sh
00:01:24.793 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.793 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.793 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.793 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:24.793 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:24.793 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.793 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:24.793 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.793 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.793 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.793 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:24.793 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.794 + source /etc/os-release
00:01:24.794 ++ NAME='Fedora Linux'
00:01:24.794 ++ VERSION='38 (Cloud Edition)'
00:01:24.794 ++ ID=fedora
00:01:24.794 ++ VERSION_ID=38
00:01:24.794 ++ VERSION_CODENAME=
00:01:24.794 ++ PLATFORM_ID=platform:f38
00:01:24.794 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:24.794 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.794 ++ LOGO=fedora-logo-icon
00:01:24.794 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:24.794 ++ HOME_URL=https://fedoraproject.org/
00:01:24.794 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:24.794 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.794 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.794 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.794 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:24.794 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.794 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:24.794 ++ SUPPORT_END=2024-05-14
00:01:24.794 ++ VARIANT='Cloud Edition'
00:01:24.794 ++ VARIANT_ID=cloud
00:01:24.794 + uname -a
00:01:24.794 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:24.794 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:26.700 Hugepages
00:01:26.700 node hugesize free / total
00:01:26.700 node0 1048576kB 0 / 0
00:01:26.700 node0 2048kB 0 / 0
00:01:26.700 node1 1048576kB 0 / 0
00:01:26.700 node1 2048kB 0 / 0
00:01:26.700 
00:01:26.700 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:26.700 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:26.700 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:26.960 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:26.960 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:26.960 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:26.960 + rm -f /tmp/spdk-ld-path
00:01:26.960 + source autorun-spdk.conf
00:01:26.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.960 ++ SPDK_TEST_NVMF=1
00:01:26.960 ++ SPDK_TEST_NVME_CLI=1
00:01:26.960 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.960 ++ SPDK_TEST_NVMF_NICS=e810
00:01:26.960 ++ SPDK_RUN_ASAN=1
00:01:26.960 ++ SPDK_RUN_UBSAN=1
00:01:26.960 ++ NET_TYPE=phy
00:01:26.960 ++ RUN_NIGHTLY=1
00:01:26.960 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:26.960 + [[ -n '' ]]
00:01:26.960 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.960 + for M in /var/spdk/build-*-manifest.txt
00:01:26.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:26.960 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:26.960 + for M in /var/spdk/build-*-manifest.txt
00:01:26.960 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:26.960 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:26.960 ++ uname
00:01:26.960 + [[ Linux == \L\i\n\u\x ]]
00:01:26.960 + sudo dmesg -T
00:01:26.960 + sudo dmesg --clear
00:01:26.960 + dmesg_pid=617701
00:01:26.960 + [[ Fedora Linux == FreeBSD ]]
00:01:26.960 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.960 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.960 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:26.960 + [[ -x /usr/src/fio-static/fio ]]
00:01:26.960 + export FIO_BIN=/usr/src/fio-static/fio
00:01:26.960 + FIO_BIN=/usr/src/fio-static/fio
00:01:26.960 + sudo dmesg -Tw
00:01:26.960 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:26.960 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:26.960 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:26.960 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.960 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.960 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:26.960 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.960 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.960 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:26.960 Test configuration:
00:01:26.960 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.960 SPDK_TEST_NVMF=1
00:01:26.960 SPDK_TEST_NVME_CLI=1
00:01:26.960 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.960 SPDK_TEST_NVMF_NICS=e810
00:01:26.960 SPDK_RUN_ASAN=1
00:01:26.960 SPDK_RUN_UBSAN=1
00:01:26.960 NET_TYPE=phy
00:01:26.960 RUN_NIGHTLY=1
11:07:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:26.960 11:07:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:26.960 11:07:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:26.960 11:07:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:26.960 11:07:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.960 11:07:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.960 11:07:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.960 11:07:13 -- paths/export.sh@5 -- $ export PATH
00:01:26.960 11:07:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:26.960 11:07:13 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:26.960 11:07:13 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:26.960 11:07:13 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720775233.XXXXXX
00:01:26.960 11:07:13 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720775233.g6aaVG
00:01:26.960 11:07:13 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:26.960 11:07:13 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:26.960 11:07:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:26.960 11:07:13 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:26.960 11:07:13 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:26.960 11:07:13 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:27.220 11:07:13 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:27.220 11:07:13 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.220 11:07:13 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:27.220 11:07:13 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:27.220 11:07:13 -- pm/common@17 -- $ local monitor
00:01:27.220 11:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.220 11:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.220 11:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.220 11:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.220 11:07:13 -- pm/common@25 -- $ sleep 1
00:01:27.220 11:07:13 -- pm/common@21 -- $ date +%s
00:01:27.220 11:07:13 -- pm/common@21 -- $ date +%s
00:01:27.220 11:07:13 -- pm/common@21 -- $ date +%s
00:01:27.220 11:07:13 -- pm/common@21 -- $ date +%s
00:01:27.220 11:07:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775233
00:01:27.220 11:07:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775233
00:01:27.220 11:07:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775233
00:01:27.220 11:07:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775233
00:01:27.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775233_collect-cpu-temp.pm.log
00:01:27.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775233_collect-vmstat.pm.log
00:01:27.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775233_collect-cpu-load.pm.log
00:01:27.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775233_collect-bmc-pm.bmc.pm.log
00:01:28.160 11:07:14 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:28.160 11:07:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.160 11:07:14 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.160 11:07:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.160 11:07:14 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.160 Fri Jul 12 09:07:14 AM UTC 2024
00:01:28.160 11:07:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.160 v24.09-pre-202-g719d03c6a
00:01:28.160 11:07:14 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:28.160 11:07:14 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:28.160 11:07:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:28.160 11:07:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.160 11:07:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.160 ************************************
00:01:28.160 START TEST asan
00:01:28.160 ************************************
00:01:28.160 11:07:14 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:28.160 using asan
00:01:28.160 
00:01:28.160 real	0m0.000s
00:01:28.160 user	0m0.000s
00:01:28.160 sys	0m0.000s
00:01:28.160 11:07:14 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:28.160 11:07:14 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.160 ************************************
00:01:28.160 END TEST asan
00:01:28.160 ************************************
00:01:28.160 11:07:14 -- common/autotest_common.sh@1142 -- $ return 0
00:01:28.160 11:07:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.160 11:07:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.160 11:07:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:28.160 11:07:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.160 11:07:14 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.160 ************************************
00:01:28.160 START TEST ubsan
00:01:28.160 ************************************
00:01:28.160 11:07:14 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:28.160 using ubsan
00:01:28.160 
00:01:28.160 real	0m0.000s
00:01:28.160 user	0m0.000s
00:01:28.160 sys	0m0.000s
00:01:28.160 11:07:14 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:28.160 11:07:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.160 ************************************
00:01:28.160 END TEST ubsan
00:01:28.160 ************************************
00:01:28.160 11:07:14 -- common/autotest_common.sh@1142 -- $ return 0
00:01:28.160 11:07:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:28.160 11:07:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:28.160 11:07:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:28.160 11:07:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:28.419 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:28.419 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:28.678 Using 'verbs' RDMA provider
00:01:41.836 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:51.821 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:51.821 Creating mk/config.mk...done.
00:01:51.821 Creating mk/cc.flags.mk...done.
00:01:51.821 Type 'make' to build.
00:01:51.821 11:07:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:51.821 11:07:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:51.821 11:07:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:51.821 11:07:38 -- common/autotest_common.sh@10 -- $ set +x
00:01:51.821 ************************************
00:01:51.821 START TEST make
00:01:51.821 ************************************
00:01:51.821 11:07:38 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:52.081 make[1]: Nothing to be done for 'all'.
00:02:00.215 The Meson build system
00:02:00.215 Version: 1.3.1
00:02:00.215 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:00.215 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:00.215 Build type: native build
00:02:00.215 Program cat found: YES (/usr/bin/cat)
00:02:00.215 Project name: DPDK
00:02:00.215 Project version: 24.03.0
00:02:00.215 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:00.215 C linker for the host machine: cc ld.bfd 2.39-16
00:02:00.215 Host machine cpu family: x86_64
00:02:00.215 Host machine cpu: x86_64
00:02:00.215 Message: ## Building in Developer Mode ##
00:02:00.215 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:00.215 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:00.215 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:00.215 Program python3 found: YES (/usr/bin/python3)
00:02:00.215 Program cat found: YES (/usr/bin/cat)
00:02:00.215 Compiler for C supports arguments -march=native: YES
00:02:00.215 Checking for size of "void *" : 8
00:02:00.215 Checking for size of "void *" : 8 (cached)
00:02:00.215 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:00.215 Library m found: YES
00:02:00.215 Library numa found: YES
00:02:00.215 Has header "numaif.h" : YES
00:02:00.215 Library fdt found: NO
00:02:00.215 Library execinfo found: NO
00:02:00.215 Has header "execinfo.h" : YES
00:02:00.215 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:00.215 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:00.215 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:00.215 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:00.215 Run-time dependency openssl found: YES 3.0.9
00:02:00.215 Run-time dependency libpcap found: YES 1.10.4
00:02:00.215 Has header "pcap.h" with dependency libpcap: YES
00:02:00.215 Compiler for C supports arguments -Wcast-qual: YES
00:02:00.215 Compiler for C supports arguments -Wdeprecated: YES
00:02:00.215 Compiler for C supports arguments -Wformat: YES
00:02:00.215 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:00.215 Compiler for C supports arguments -Wformat-security: NO
00:02:00.215 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:00.215 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:00.215 Compiler for C supports arguments -Wnested-externs: YES
00:02:00.215 Compiler for C supports arguments -Wold-style-definition: YES
00:02:00.215 Compiler for C supports arguments -Wpointer-arith: YES
00:02:00.215 Compiler for C supports arguments -Wsign-compare: YES
00:02:00.215 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:00.215 Compiler for C supports arguments -Wundef: YES
00:02:00.215 Compiler for C supports arguments -Wwrite-strings: YES
00:02:00.215 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:00.215 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:00.215 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:00.215 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:00.215 Program objdump found: YES (/usr/bin/objdump)
00:02:00.215 Compiler for C supports arguments -mavx512f: YES
00:02:00.215 Checking if "AVX512 checking" compiles: YES
00:02:00.215 Fetching value of define "__SSE4_2__" : 1
00:02:00.215 Fetching value of define "__AES__" : 1
00:02:00.215 Fetching value of define "__AVX__" : 1
00:02:00.215 Fetching value of define "__AVX2__" : 1
00:02:00.215 Fetching value of define "__AVX512BW__" : 1
00:02:00.215 Fetching value of define "__AVX512CD__" : 1
00:02:00.215 Fetching value of define "__AVX512DQ__" : 1
00:02:00.215 Fetching value of define "__AVX512F__" : 1
00:02:00.215 Fetching value of define "__AVX512VL__" : 1
00:02:00.215 Fetching value of define "__PCLMUL__" : 1
00:02:00.215 Fetching value of define "__RDRND__" : 1
00:02:00.215 Fetching value of define "__RDSEED__" : 1
00:02:00.215 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:00.215 Fetching value of define "__znver1__" : (undefined)
00:02:00.215 Fetching value of define "__znver2__" : (undefined)
00:02:00.215 Fetching value of define "__znver3__" : (undefined)
00:02:00.215 Fetching value of define "__znver4__" : (undefined)
00:02:00.215 Library asan found: YES
00:02:00.215 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:00.215 Message: lib/log: Defining dependency "log"
00:02:00.215 Message: lib/kvargs: Defining dependency "kvargs"
00:02:00.215 Message: lib/telemetry: Defining dependency "telemetry"
00:02:00.215 Library rt found: YES
00:02:00.215 Checking for function "getentropy" : NO
00:02:00.215 Message: lib/eal: Defining dependency "eal"
00:02:00.215 Message: lib/ring: Defining dependency "ring"
00:02:00.215 Message: lib/rcu: Defining dependency "rcu"
00:02:00.215 Message: lib/mempool: Defining dependency "mempool"
00:02:00.215 Message: lib/mbuf: Defining dependency "mbuf"
00:02:00.215 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:00.216 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:00.216 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:00.216 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:00.216 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:00.216 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:00.216 Compiler for C supports arguments -mpclmul: YES
00:02:00.216 Compiler for C supports arguments -maes: YES
00:02:00.216 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:00.216 Compiler for C supports arguments -mavx512bw: YES
00:02:00.216 Compiler for C supports arguments -mavx512dq: YES
00:02:00.216 Compiler for C supports arguments -mavx512vl: YES
00:02:00.216 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:00.216 Compiler for C supports arguments -mavx2: YES
00:02:00.216 Compiler for C supports arguments -mavx: YES
00:02:00.216 Message: lib/net: Defining dependency "net"
00:02:00.216 Message: lib/meter: Defining dependency "meter"
00:02:00.216 Message: lib/ethdev: Defining dependency "ethdev"
00:02:00.216 Message: lib/pci: Defining dependency "pci"
00:02:00.216 Message: lib/cmdline: Defining dependency "cmdline"
00:02:00.216 Message: lib/hash: Defining dependency "hash"
00:02:00.216 Message: lib/timer: Defining dependency "timer"
00:02:00.216 Message: lib/compressdev: Defining dependency "compressdev"
00:02:00.216 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:00.216 Message: lib/dmadev: Defining dependency "dmadev"
00:02:00.216 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:00.216 Message: lib/power: Defining dependency "power"
00:02:00.216 Message: lib/reorder: Defining dependency "reorder"
00:02:00.216 Message: lib/security: Defining dependency "security"
00:02:00.216 Has header "linux/userfaultfd.h" : YES
00:02:00.216 Has header "linux/vduse.h" : YES
00:02:00.216 Message: lib/vhost: Defining dependency "vhost"
00:02:00.216 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:00.216 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:00.216 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:00.216 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:00.216 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:00.216 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:00.216 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:00.216 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:00.216 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:00.216 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:00.216 Program doxygen found: YES (/usr/bin/doxygen)
00:02:00.216 Configuring doxy-api-html.conf using configuration
00:02:00.216 Configuring doxy-api-man.conf using configuration
00:02:00.216 Program mandb found: YES (/usr/bin/mandb)
00:02:00.216 Program sphinx-build found: NO
00:02:00.216 Configuring rte_build_config.h using configuration
00:02:00.216 Message:
00:02:00.216 =================
00:02:00.216 Applications Enabled
00:02:00.216 =================
00:02:00.216 
00:02:00.216 apps:
00:02:00.216 
00:02:00.216 
00:02:00.216 Message:
00:02:00.216 =================
00:02:00.216 Libraries Enabled
00:02:00.216 =================
00:02:00.216 
00:02:00.216 libs:
00:02:00.216 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:00.216 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:00.216 cryptodev, dmadev, power, reorder, security, vhost,
00:02:00.216 
00:02:00.216 Message:
00:02:00.216 ===============
00:02:00.216 Drivers Enabled
00:02:00.216 ===============
00:02:00.216 
00:02:00.216 common:
00:02:00.216 
00:02:00.216 bus:
00:02:00.216 pci, vdev,
00:02:00.216 mempool:
00:02:00.216 ring,
00:02:00.216 dma:
00:02:00.216 
00:02:00.216 net:
00:02:00.216 
00:02:00.216 crypto:
00:02:00.216 
00:02:00.216 compress:
00:02:00.216 
00:02:00.216 vdpa:
00:02:00.216 
00:02:00.216 
00:02:00.216 Message:
00:02:00.216 =================
00:02:00.216 Content Skipped
00:02:00.216 =================
00:02:00.216 
00:02:00.216 apps:
00:02:00.216 dumpcap: explicitly disabled via build config
00:02:00.216 graph: explicitly disabled via build config
00:02:00.216 pdump: explicitly disabled via build config
00:02:00.216 proc-info: explicitly disabled via build config
00:02:00.216 test-acl: explicitly disabled via build config
00:02:00.216 test-bbdev: explicitly disabled via build config
00:02:00.216 test-cmdline: explicitly disabled via build config
00:02:00.216 test-compress-perf: explicitly disabled via build config
00:02:00.216 test-crypto-perf: explicitly disabled via build config
00:02:00.216 test-dma-perf: explicitly disabled via build config
00:02:00.216 test-eventdev: explicitly disabled via build config
00:02:00.216 test-fib: explicitly disabled via build config
00:02:00.216 test-flow-perf: explicitly disabled via build config
00:02:00.216 test-gpudev: explicitly disabled via build config
00:02:00.216 test-mldev: explicitly disabled via build config
00:02:00.216 test-pipeline: explicitly disabled via build config
00:02:00.216 test-pmd: explicitly disabled via build config
00:02:00.216 test-regex: explicitly disabled via build config
00:02:00.216 test-sad: explicitly disabled via build config
00:02:00.216 test-security-perf: explicitly disabled via build config
00:02:00.216 
00:02:00.216 libs:
00:02:00.216 argparse: explicitly disabled via build config
00:02:00.216 metrics: explicitly disabled via build config
00:02:00.216 acl: explicitly disabled via build config
00:02:00.216 bbdev: explicitly disabled via build config
00:02:00.216 bitratestats: explicitly disabled via build config
00:02:00.216 bpf: explicitly disabled via build config
00:02:00.216 cfgfile: explicitly disabled via build config
00:02:00.216 distributor: explicitly disabled via build config
00:02:00.216 efd: explicitly disabled via build config
00:02:00.216 eventdev: explicitly disabled via build config
00:02:00.216 dispatcher: explicitly disabled via
build config 00:02:00.216 gpudev: explicitly disabled via build config 00:02:00.216 gro: explicitly disabled via build config 00:02:00.216 gso: explicitly disabled via build config 00:02:00.216 ip_frag: explicitly disabled via build config 00:02:00.216 jobstats: explicitly disabled via build config 00:02:00.216 latencystats: explicitly disabled via build config 00:02:00.216 lpm: explicitly disabled via build config 00:02:00.216 member: explicitly disabled via build config 00:02:00.216 pcapng: explicitly disabled via build config 00:02:00.216 rawdev: explicitly disabled via build config 00:02:00.216 regexdev: explicitly disabled via build config 00:02:00.216 mldev: explicitly disabled via build config 00:02:00.216 rib: explicitly disabled via build config 00:02:00.216 sched: explicitly disabled via build config 00:02:00.216 stack: explicitly disabled via build config 00:02:00.216 ipsec: explicitly disabled via build config 00:02:00.216 pdcp: explicitly disabled via build config 00:02:00.216 fib: explicitly disabled via build config 00:02:00.216 port: explicitly disabled via build config 00:02:00.216 pdump: explicitly disabled via build config 00:02:00.216 table: explicitly disabled via build config 00:02:00.216 pipeline: explicitly disabled via build config 00:02:00.216 graph: explicitly disabled via build config 00:02:00.216 node: explicitly disabled via build config 00:02:00.216 00:02:00.216 drivers: 00:02:00.216 common/cpt: not in enabled drivers build config 00:02:00.216 common/dpaax: not in enabled drivers build config 00:02:00.216 common/iavf: not in enabled drivers build config 00:02:00.216 common/idpf: not in enabled drivers build config 00:02:00.216 common/ionic: not in enabled drivers build config 00:02:00.216 common/mvep: not in enabled drivers build config 00:02:00.216 common/octeontx: not in enabled drivers build config 00:02:00.216 bus/auxiliary: not in enabled drivers build config 00:02:00.216 bus/cdx: not in enabled drivers build config 00:02:00.216 
bus/dpaa: not in enabled drivers build config 00:02:00.216 bus/fslmc: not in enabled drivers build config 00:02:00.216 bus/ifpga: not in enabled drivers build config 00:02:00.216 bus/platform: not in enabled drivers build config 00:02:00.216 bus/uacce: not in enabled drivers build config 00:02:00.216 bus/vmbus: not in enabled drivers build config 00:02:00.216 common/cnxk: not in enabled drivers build config 00:02:00.216 common/mlx5: not in enabled drivers build config 00:02:00.216 common/nfp: not in enabled drivers build config 00:02:00.216 common/nitrox: not in enabled drivers build config 00:02:00.216 common/qat: not in enabled drivers build config 00:02:00.216 common/sfc_efx: not in enabled drivers build config 00:02:00.216 mempool/bucket: not in enabled drivers build config 00:02:00.216 mempool/cnxk: not in enabled drivers build config 00:02:00.216 mempool/dpaa: not in enabled drivers build config 00:02:00.216 mempool/dpaa2: not in enabled drivers build config 00:02:00.216 mempool/octeontx: not in enabled drivers build config 00:02:00.216 mempool/stack: not in enabled drivers build config 00:02:00.216 dma/cnxk: not in enabled drivers build config 00:02:00.216 dma/dpaa: not in enabled drivers build config 00:02:00.216 dma/dpaa2: not in enabled drivers build config 00:02:00.216 dma/hisilicon: not in enabled drivers build config 00:02:00.216 dma/idxd: not in enabled drivers build config 00:02:00.216 dma/ioat: not in enabled drivers build config 00:02:00.216 dma/skeleton: not in enabled drivers build config 00:02:00.216 net/af_packet: not in enabled drivers build config 00:02:00.216 net/af_xdp: not in enabled drivers build config 00:02:00.216 net/ark: not in enabled drivers build config 00:02:00.216 net/atlantic: not in enabled drivers build config 00:02:00.216 net/avp: not in enabled drivers build config 00:02:00.216 net/axgbe: not in enabled drivers build config 00:02:00.216 net/bnx2x: not in enabled drivers build config 00:02:00.216 net/bnxt: not in enabled 
drivers build config 00:02:00.216 net/bonding: not in enabled drivers build config 00:02:00.216 net/cnxk: not in enabled drivers build config 00:02:00.216 net/cpfl: not in enabled drivers build config 00:02:00.216 net/cxgbe: not in enabled drivers build config 00:02:00.216 net/dpaa: not in enabled drivers build config 00:02:00.216 net/dpaa2: not in enabled drivers build config 00:02:00.216 net/e1000: not in enabled drivers build config 00:02:00.216 net/ena: not in enabled drivers build config 00:02:00.216 net/enetc: not in enabled drivers build config 00:02:00.216 net/enetfec: not in enabled drivers build config 00:02:00.216 net/enic: not in enabled drivers build config 00:02:00.216 net/failsafe: not in enabled drivers build config 00:02:00.216 net/fm10k: not in enabled drivers build config 00:02:00.216 net/gve: not in enabled drivers build config 00:02:00.216 net/hinic: not in enabled drivers build config 00:02:00.216 net/hns3: not in enabled drivers build config 00:02:00.216 net/i40e: not in enabled drivers build config 00:02:00.216 net/iavf: not in enabled drivers build config 00:02:00.217 net/ice: not in enabled drivers build config 00:02:00.217 net/idpf: not in enabled drivers build config 00:02:00.217 net/igc: not in enabled drivers build config 00:02:00.217 net/ionic: not in enabled drivers build config 00:02:00.217 net/ipn3ke: not in enabled drivers build config 00:02:00.217 net/ixgbe: not in enabled drivers build config 00:02:00.217 net/mana: not in enabled drivers build config 00:02:00.217 net/memif: not in enabled drivers build config 00:02:00.217 net/mlx4: not in enabled drivers build config 00:02:00.217 net/mlx5: not in enabled drivers build config 00:02:00.217 net/mvneta: not in enabled drivers build config 00:02:00.217 net/mvpp2: not in enabled drivers build config 00:02:00.217 net/netvsc: not in enabled drivers build config 00:02:00.217 net/nfb: not in enabled drivers build config 00:02:00.217 net/nfp: not in enabled drivers build config 
00:02:00.217 net/ngbe: not in enabled drivers build config 00:02:00.217 net/null: not in enabled drivers build config 00:02:00.217 net/octeontx: not in enabled drivers build config 00:02:00.217 net/octeon_ep: not in enabled drivers build config 00:02:00.217 net/pcap: not in enabled drivers build config 00:02:00.217 net/pfe: not in enabled drivers build config 00:02:00.217 net/qede: not in enabled drivers build config 00:02:00.217 net/ring: not in enabled drivers build config 00:02:00.217 net/sfc: not in enabled drivers build config 00:02:00.217 net/softnic: not in enabled drivers build config 00:02:00.217 net/tap: not in enabled drivers build config 00:02:00.217 net/thunderx: not in enabled drivers build config 00:02:00.217 net/txgbe: not in enabled drivers build config 00:02:00.217 net/vdev_netvsc: not in enabled drivers build config 00:02:00.217 net/vhost: not in enabled drivers build config 00:02:00.217 net/virtio: not in enabled drivers build config 00:02:00.217 net/vmxnet3: not in enabled drivers build config 00:02:00.217 raw/*: missing internal dependency, "rawdev" 00:02:00.217 crypto/armv8: not in enabled drivers build config 00:02:00.217 crypto/bcmfs: not in enabled drivers build config 00:02:00.217 crypto/caam_jr: not in enabled drivers build config 00:02:00.217 crypto/ccp: not in enabled drivers build config 00:02:00.217 crypto/cnxk: not in enabled drivers build config 00:02:00.217 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.217 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.217 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.217 crypto/mlx5: not in enabled drivers build config 00:02:00.217 crypto/mvsam: not in enabled drivers build config 00:02:00.217 crypto/nitrox: not in enabled drivers build config 00:02:00.217 crypto/null: not in enabled drivers build config 00:02:00.217 crypto/octeontx: not in enabled drivers build config 00:02:00.217 crypto/openssl: not in enabled drivers build config 00:02:00.217 
crypto/scheduler: not in enabled drivers build config 00:02:00.217 crypto/uadk: not in enabled drivers build config 00:02:00.217 crypto/virtio: not in enabled drivers build config 00:02:00.217 compress/isal: not in enabled drivers build config 00:02:00.217 compress/mlx5: not in enabled drivers build config 00:02:00.217 compress/nitrox: not in enabled drivers build config 00:02:00.217 compress/octeontx: not in enabled drivers build config 00:02:00.217 compress/zlib: not in enabled drivers build config 00:02:00.217 regex/*: missing internal dependency, "regexdev" 00:02:00.217 ml/*: missing internal dependency, "mldev" 00:02:00.217 vdpa/ifc: not in enabled drivers build config 00:02:00.217 vdpa/mlx5: not in enabled drivers build config 00:02:00.217 vdpa/nfp: not in enabled drivers build config 00:02:00.217 vdpa/sfc: not in enabled drivers build config 00:02:00.217 event/*: missing internal dependency, "eventdev" 00:02:00.217 baseband/*: missing internal dependency, "bbdev" 00:02:00.217 gpu/*: missing internal dependency, "gpudev" 00:02:00.217 00:02:00.217 00:02:00.217 Build targets in project: 85 00:02:00.217 00:02:00.217 DPDK 24.03.0 00:02:00.217 00:02:00.217 User defined options 00:02:00.217 buildtype : debug 00:02:00.217 default_library : shared 00:02:00.217 libdir : lib 00:02:00.217 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:00.217 b_sanitize : address 00:02:00.217 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.217 c_link_args : 00:02:00.217 cpu_instruction_set: native 00:02:00.217 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:00.217 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:00.217 enable_docs : false 00:02:00.217 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.217 enable_kmods : false 00:02:00.217 max_lcores : 128 00:02:00.217 tests : false 00:02:00.217 00:02:00.217 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.217 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:00.217 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.217 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.217 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.217 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.217 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.217 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.217 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.217 [8/268] Linking static target lib/librte_kvargs.a 00:02:00.477 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.477 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.478 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.478 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.478 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.478 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.478 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.478 [16/268] Linking static target lib/librte_log.a 00:02:00.478 [17/268] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.478 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.478 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.478 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.478 [21/268] Linking static target lib/librte_pci.a 00:02:00.478 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.739 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.739 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.739 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.739 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.739 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.739 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.739 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.739 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.739 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.739 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.739 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.739 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.739 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.739 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.739 [37/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.739 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.739 [39/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.739 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.739 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.739 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.739 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.739 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.739 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.739 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.739 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.739 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.739 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.739 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.739 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.739 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.739 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.739 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.739 [55/268] Linking static target lib/librte_meter.a 00:02:00.739 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.739 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.739 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.739 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.739 [60/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.998 [61/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.998 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.998 [63/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.998 [64/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.998 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.998 [66/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.998 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.998 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.999 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.999 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.999 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.999 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.999 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.999 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.999 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.999 [76/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.999 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.999 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.999 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.999 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.999 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.999 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.999 [83/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.999 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.999 [85/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.999 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.999 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.999 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.999 [89/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.999 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.999 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:00.999 [92/268] Linking static target lib/librte_ring.a 00:02:00.999 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.999 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.999 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.999 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.999 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.999 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.999 [99/268] Linking static target lib/librte_telemetry.a 00:02:00.999 [100/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.999 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.999 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.999 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.999 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.999 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.999 [106/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.999 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.999 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.999 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.999 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.999 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.999 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.999 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.999 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.999 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.999 [116/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.999 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.999 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.999 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.999 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.999 [121/268] Linking static target lib/librte_cmdline.a 00:02:00.999 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.999 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.999 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.258 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.258 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.258 [127/268] Linking static target lib/librte_mempool.a 00:02:01.258 [128/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:01.258 [129/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.258 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.258 [131/268] Linking static target lib/librte_rcu.a 00:02:01.258 [132/268] Linking target lib/librte_log.so.24.1 00:02:01.258 [133/268] Linking static target lib/librte_net.a 00:02:01.258 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.258 [135/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.258 [136/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.258 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.258 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.258 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:01.258 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.258 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.258 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.258 [143/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.258 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.258 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.258 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.258 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:01.258 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.258 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.258 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.258 [151/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:02:01.258 [152/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.258 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.258 [154/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.258 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.517 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.517 [157/268] Linking static target lib/librte_eal.a 00:02:01.517 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.517 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:01.517 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:01.517 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:01.517 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:01.517 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.517 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.517 [165/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.517 [166/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.517 [167/268] Linking static target lib/librte_timer.a 00:02:01.517 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.517 [169/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.517 [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.517 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.517 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.517 [173/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.517 [174/268] 
Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:01.517 [175/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:01.517 [176/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.517 [177/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.517 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:01.517 [179/268] Linking static target lib/librte_dmadev.a 00:02:01.517 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.517 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.517 [182/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.517 [183/268] Linking static target lib/librte_power.a 00:02:01.517 [184/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.517 [185/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.517 [186/268] Linking static target drivers/librte_bus_vdev.a 00:02:01.517 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.517 [188/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.517 [189/268] Linking static target lib/librte_compressdev.a 00:02:01.517 [190/268] Linking static target lib/librte_reorder.a 00:02:01.517 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.517 [192/268] Linking static target lib/librte_mbuf.a 00:02:01.777 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.777 [194/268] Linking static target lib/librte_security.a 00:02:01.777 [195/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.777 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.777 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:02:01.777 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.777 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.777 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.777 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.777 [202/268] Linking static target drivers/librte_bus_pci.a 00:02:01.777 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.777 [204/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.036 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.036 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.036 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.036 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.036 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.036 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:02.036 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.036 [212/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.036 [213/268] Linking static target lib/librte_hash.a 00:02:02.036 [214/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.294 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.294 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.294 [217/268] Linking static target lib/librte_cryptodev.a 00:02:02.294 [218/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.294 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.294 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.294 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.294 [222/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.553 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.811 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.811 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.812 [226/268] Linking static target lib/librte_ethdev.a 00:02:03.744 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.003 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.292 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:07.292 [230/268] Linking static target lib/librte_vhost.a 00:02:08.670 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.576 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.144 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.144 [234/268] Linking target lib/librte_eal.so.24.1 00:02:11.144 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:11.403 [236/268] Linking target lib/librte_timer.so.24.1 00:02:11.403 [237/268] Linking target lib/librte_meter.so.24.1 00:02:11.403 [238/268] Linking target lib/librte_ring.so.24.1 00:02:11.403 [239/268] Linking target lib/librte_pci.so.24.1 00:02:11.403 
[240/268] Linking target lib/librte_dmadev.so.24.1 00:02:11.403 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:11.403 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:11.403 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:11.403 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:11.403 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:11.403 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:11.403 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:11.403 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:11.403 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:11.662 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:11.662 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:11.662 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:11.662 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:11.920 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:11.920 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:11.920 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:11.920 [257/268] Linking target lib/librte_net.so.24.1 00:02:11.920 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:11.920 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.920 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.920 [261/268] Linking target lib/librte_hash.so.24.1 00:02:11.920 [262/268] Linking target lib/librte_security.so.24.1 00:02:11.920 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:12.179 [264/268] Linking target lib/librte_ethdev.so.24.1 
00:02:12.179 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:12.179 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:12.179 [267/268] Linking target lib/librte_power.so.24.1 00:02:12.179 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:12.179 INFO: autodetecting backend as ninja 00:02:12.179 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:13.117 CC lib/ut_mock/mock.o 00:02:13.377 CC lib/log/log.o 00:02:13.377 CC lib/log/log_flags.o 00:02:13.377 CC lib/log/log_deprecated.o 00:02:13.377 CC lib/ut/ut.o 00:02:13.377 LIB libspdk_ut_mock.a 00:02:13.377 LIB libspdk_log.a 00:02:13.377 SO libspdk_ut_mock.so.6.0 00:02:13.377 LIB libspdk_ut.a 00:02:13.377 SO libspdk_log.so.7.0 00:02:13.377 SO libspdk_ut.so.2.0 00:02:13.377 SYMLINK libspdk_ut_mock.so 00:02:13.636 SYMLINK libspdk_log.so 00:02:13.636 SYMLINK libspdk_ut.so 00:02:13.895 CC lib/util/base64.o 00:02:13.895 CC lib/ioat/ioat.o 00:02:13.895 CC lib/util/bit_array.o 00:02:13.895 CC lib/util/cpuset.o 00:02:13.895 CC lib/util/crc16.o 00:02:13.895 CC lib/util/crc32.o 00:02:13.895 CC lib/util/crc32c.o 00:02:13.895 CC lib/util/crc32_ieee.o 00:02:13.895 CC lib/util/crc64.o 00:02:13.895 CC lib/util/dif.o 00:02:13.895 CC lib/util/fd.o 00:02:13.895 CC lib/util/hexlify.o 00:02:13.895 CC lib/util/file.o 00:02:13.895 CC lib/util/iov.o 00:02:13.895 CC lib/util/pipe.o 00:02:13.895 CC lib/util/math.o 00:02:13.895 CC lib/util/string.o 00:02:13.895 CC lib/util/strerror_tls.o 00:02:13.895 CC lib/util/uuid.o 00:02:13.895 CC lib/util/zipf.o 00:02:13.895 CC lib/util/xor.o 00:02:13.895 CC lib/util/fd_group.o 00:02:13.895 CC lib/dma/dma.o 00:02:13.895 CXX lib/trace_parser/trace.o 00:02:13.895 CC lib/vfio_user/host/vfio_user_pci.o 00:02:13.895 CC lib/vfio_user/host/vfio_user.o 00:02:13.895 LIB libspdk_dma.a 00:02:14.153 SO libspdk_dma.so.4.0 00:02:14.153 
LIB libspdk_ioat.a 00:02:14.153 SYMLINK libspdk_dma.so 00:02:14.153 SO libspdk_ioat.so.7.0 00:02:14.153 SYMLINK libspdk_ioat.so 00:02:14.153 LIB libspdk_vfio_user.a 00:02:14.153 SO libspdk_vfio_user.so.5.0 00:02:14.412 SYMLINK libspdk_vfio_user.so 00:02:14.412 LIB libspdk_util.a 00:02:14.412 SO libspdk_util.so.9.1 00:02:14.412 SYMLINK libspdk_util.so 00:02:14.671 LIB libspdk_trace_parser.a 00:02:14.671 SO libspdk_trace_parser.so.5.0 00:02:14.671 SYMLINK libspdk_trace_parser.so 00:02:14.929 CC lib/json/json_parse.o 00:02:14.929 CC lib/json/json_util.o 00:02:14.929 CC lib/json/json_write.o 00:02:14.929 CC lib/conf/conf.o 00:02:14.929 CC lib/idxd/idxd.o 00:02:14.929 CC lib/idxd/idxd_user.o 00:02:14.929 CC lib/idxd/idxd_kernel.o 00:02:14.929 CC lib/env_dpdk/env.o 00:02:14.929 CC lib/rdma_provider/common.o 00:02:14.929 CC lib/env_dpdk/memory.o 00:02:14.929 CC lib/env_dpdk/init.o 00:02:14.929 CC lib/env_dpdk/pci.o 00:02:14.929 CC lib/vmd/led.o 00:02:14.929 CC lib/env_dpdk/threads.o 00:02:14.929 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:14.929 CC lib/vmd/vmd.o 00:02:14.929 CC lib/env_dpdk/pci_ioat.o 00:02:14.929 CC lib/env_dpdk/pci_virtio.o 00:02:14.929 CC lib/env_dpdk/pci_vmd.o 00:02:14.929 CC lib/env_dpdk/pci_idxd.o 00:02:14.929 CC lib/rdma_utils/rdma_utils.o 00:02:14.929 CC lib/env_dpdk/pci_event.o 00:02:14.929 CC lib/env_dpdk/sigbus_handler.o 00:02:14.929 CC lib/env_dpdk/pci_dpdk.o 00:02:14.929 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:14.929 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:14.929 LIB libspdk_rdma_provider.a 00:02:15.187 LIB libspdk_conf.a 00:02:15.187 SO libspdk_rdma_provider.so.6.0 00:02:15.187 SO libspdk_conf.so.6.0 00:02:15.187 LIB libspdk_json.a 00:02:15.187 LIB libspdk_rdma_utils.a 00:02:15.187 SO libspdk_rdma_utils.so.1.0 00:02:15.187 SYMLINK libspdk_rdma_provider.so 00:02:15.187 SO libspdk_json.so.6.0 00:02:15.187 SYMLINK libspdk_conf.so 00:02:15.187 SYMLINK libspdk_rdma_utils.so 00:02:15.188 SYMLINK libspdk_json.so 00:02:15.446 LIB libspdk_idxd.a 
00:02:15.446 SO libspdk_idxd.so.12.0 00:02:15.446 LIB libspdk_vmd.a 00:02:15.446 CC lib/jsonrpc/jsonrpc_server.o 00:02:15.446 CC lib/jsonrpc/jsonrpc_client.o 00:02:15.446 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:15.446 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:15.446 SYMLINK libspdk_idxd.so 00:02:15.446 SO libspdk_vmd.so.6.0 00:02:15.704 SYMLINK libspdk_vmd.so 00:02:15.704 LIB libspdk_jsonrpc.a 00:02:15.704 SO libspdk_jsonrpc.so.6.0 00:02:15.963 SYMLINK libspdk_jsonrpc.so 00:02:16.221 LIB libspdk_env_dpdk.a 00:02:16.221 CC lib/rpc/rpc.o 00:02:16.221 SO libspdk_env_dpdk.so.14.1 00:02:16.221 SYMLINK libspdk_env_dpdk.so 00:02:16.221 LIB libspdk_rpc.a 00:02:16.479 SO libspdk_rpc.so.6.0 00:02:16.479 SYMLINK libspdk_rpc.so 00:02:16.737 CC lib/notify/notify.o 00:02:16.737 CC lib/notify/notify_rpc.o 00:02:16.737 CC lib/trace/trace.o 00:02:16.737 CC lib/trace/trace_flags.o 00:02:16.737 CC lib/trace/trace_rpc.o 00:02:16.737 CC lib/keyring/keyring.o 00:02:16.737 CC lib/keyring/keyring_rpc.o 00:02:16.996 LIB libspdk_notify.a 00:02:16.996 SO libspdk_notify.so.6.0 00:02:16.996 LIB libspdk_trace.a 00:02:16.996 LIB libspdk_keyring.a 00:02:16.996 SO libspdk_trace.so.10.0 00:02:16.996 SYMLINK libspdk_notify.so 00:02:16.996 SO libspdk_keyring.so.1.0 00:02:16.996 SYMLINK libspdk_trace.so 00:02:16.996 SYMLINK libspdk_keyring.so 00:02:17.254 CC lib/thread/thread.o 00:02:17.254 CC lib/thread/iobuf.o 00:02:17.254 CC lib/sock/sock.o 00:02:17.254 CC lib/sock/sock_rpc.o 00:02:17.822 LIB libspdk_sock.a 00:02:17.822 SO libspdk_sock.so.10.0 00:02:17.822 SYMLINK libspdk_sock.so 00:02:18.081 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.081 CC lib/nvme/nvme_ctrlr.o 00:02:18.081 CC lib/nvme/nvme_fabric.o 00:02:18.081 CC lib/nvme/nvme_ns_cmd.o 00:02:18.081 CC lib/nvme/nvme_ns.o 00:02:18.081 CC lib/nvme/nvme_pcie_common.o 00:02:18.081 CC lib/nvme/nvme.o 00:02:18.081 CC lib/nvme/nvme_pcie.o 00:02:18.081 CC lib/nvme/nvme_qpair.o 00:02:18.081 CC lib/nvme/nvme_quirks.o 00:02:18.081 CC lib/nvme/nvme_transport.o 
00:02:18.081 CC lib/nvme/nvme_discovery.o 00:02:18.081 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.081 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.081 CC lib/nvme/nvme_tcp.o 00:02:18.081 CC lib/nvme/nvme_opal.o 00:02:18.081 CC lib/nvme/nvme_io_msg.o 00:02:18.081 CC lib/nvme/nvme_poll_group.o 00:02:18.081 CC lib/nvme/nvme_zns.o 00:02:18.081 CC lib/nvme/nvme_stubs.o 00:02:18.081 CC lib/nvme/nvme_auth.o 00:02:18.081 CC lib/nvme/nvme_cuse.o 00:02:18.081 CC lib/nvme/nvme_rdma.o 00:02:18.650 LIB libspdk_thread.a 00:02:18.650 SO libspdk_thread.so.10.1 00:02:18.910 SYMLINK libspdk_thread.so 00:02:19.167 CC lib/init/json_config.o 00:02:19.167 CC lib/init/subsystem.o 00:02:19.167 CC lib/init/subsystem_rpc.o 00:02:19.167 CC lib/init/rpc.o 00:02:19.167 CC lib/virtio/virtio_vhost_user.o 00:02:19.167 CC lib/virtio/virtio.o 00:02:19.167 CC lib/virtio/virtio_vfio_user.o 00:02:19.167 CC lib/virtio/virtio_pci.o 00:02:19.167 CC lib/blob/blobstore.o 00:02:19.167 CC lib/blob/request.o 00:02:19.167 CC lib/blob/zeroes.o 00:02:19.167 CC lib/blob/blob_bs_dev.o 00:02:19.167 CC lib/accel/accel.o 00:02:19.167 CC lib/accel/accel_rpc.o 00:02:19.167 CC lib/accel/accel_sw.o 00:02:19.167 LIB libspdk_init.a 00:02:19.426 SO libspdk_init.so.5.0 00:02:19.426 LIB libspdk_virtio.a 00:02:19.426 SYMLINK libspdk_init.so 00:02:19.426 SO libspdk_virtio.so.7.0 00:02:19.426 SYMLINK libspdk_virtio.so 00:02:19.685 CC lib/event/app.o 00:02:19.685 CC lib/event/reactor.o 00:02:19.685 CC lib/event/log_rpc.o 00:02:19.685 CC lib/event/app_rpc.o 00:02:19.685 CC lib/event/scheduler_static.o 00:02:19.943 LIB libspdk_accel.a 00:02:19.943 SO libspdk_accel.so.15.1 00:02:19.943 LIB libspdk_nvme.a 00:02:20.202 SYMLINK libspdk_accel.so 00:02:20.202 LIB libspdk_event.a 00:02:20.202 SO libspdk_nvme.so.13.1 00:02:20.202 SO libspdk_event.so.14.0 00:02:20.202 SYMLINK libspdk_event.so 00:02:20.461 CC lib/bdev/bdev.o 00:02:20.461 CC lib/bdev/bdev_rpc.o 00:02:20.461 CC lib/bdev/bdev_zone.o 00:02:20.461 CC lib/bdev/part.o 00:02:20.461 CC 
lib/bdev/scsi_nvme.o 00:02:20.461 SYMLINK libspdk_nvme.so 00:02:21.915 LIB libspdk_blob.a 00:02:21.916 SO libspdk_blob.so.11.0 00:02:22.217 SYMLINK libspdk_blob.so 00:02:22.476 CC lib/blobfs/blobfs.o 00:02:22.476 CC lib/blobfs/tree.o 00:02:22.476 CC lib/lvol/lvol.o 00:02:22.735 LIB libspdk_bdev.a 00:02:22.735 SO libspdk_bdev.so.15.1 00:02:22.993 SYMLINK libspdk_bdev.so 00:02:22.993 LIB libspdk_blobfs.a 00:02:23.251 CC lib/ublk/ublk.o 00:02:23.251 CC lib/ublk/ublk_rpc.o 00:02:23.251 CC lib/ftl/ftl_core.o 00:02:23.251 CC lib/ftl/ftl_layout.o 00:02:23.251 CC lib/ftl/ftl_init.o 00:02:23.251 CC lib/ftl/ftl_debug.o 00:02:23.251 CC lib/nvmf/ctrlr_discovery.o 00:02:23.251 CC lib/nvmf/ctrlr.o 00:02:23.251 CC lib/ftl/ftl_io.o 00:02:23.251 CC lib/nvmf/ctrlr_bdev.o 00:02:23.251 CC lib/ftl/ftl_sb.o 00:02:23.251 CC lib/nvmf/subsystem.o 00:02:23.251 CC lib/nvmf/nvmf_rpc.o 00:02:23.251 CC lib/ftl/ftl_l2p.o 00:02:23.251 CC lib/ftl/ftl_nv_cache.o 00:02:23.251 CC lib/ftl/ftl_l2p_flat.o 00:02:23.251 CC lib/nvmf/nvmf.o 00:02:23.251 CC lib/scsi/dev.o 00:02:23.251 CC lib/nbd/nbd.o 00:02:23.251 CC lib/ftl/ftl_band.o 00:02:23.251 CC lib/nbd/nbd_rpc.o 00:02:23.251 CC lib/nvmf/stubs.o 00:02:23.251 CC lib/nvmf/tcp.o 00:02:23.251 CC lib/nvmf/transport.o 00:02:23.251 SO libspdk_blobfs.so.10.0 00:02:23.251 CC lib/scsi/lun.o 00:02:23.251 CC lib/ftl/ftl_band_ops.o 00:02:23.251 CC lib/ftl/ftl_writer.o 00:02:23.251 CC lib/scsi/port.o 00:02:23.251 CC lib/nvmf/mdns_server.o 00:02:23.251 CC lib/nvmf/rdma.o 00:02:23.251 CC lib/scsi/scsi.o 00:02:23.251 CC lib/ftl/ftl_rq.o 00:02:23.251 CC lib/scsi/scsi_pr.o 00:02:23.251 CC lib/ftl/ftl_reloc.o 00:02:23.251 CC lib/scsi/scsi_bdev.o 00:02:23.251 CC lib/scsi/scsi_rpc.o 00:02:23.251 CC lib/nvmf/auth.o 00:02:23.251 CC lib/ftl/ftl_l2p_cache.o 00:02:23.251 CC lib/ftl/ftl_p2l.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.251 CC lib/scsi/task.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.251 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.251 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.251 CC lib/ftl/utils/ftl_conf.o 00:02:23.251 CC lib/ftl/utils/ftl_md.o 00:02:23.251 CC lib/ftl/utils/ftl_mempool.o 00:02:23.251 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.251 CC lib/ftl/utils/ftl_property.o 00:02:23.252 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.252 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.252 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.252 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.252 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.252 CC lib/ftl/base/ftl_base_dev.o 00:02:23.252 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.252 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.252 CC lib/ftl/ftl_trace.o 00:02:23.252 LIB libspdk_lvol.a 00:02:23.252 SO libspdk_lvol.so.10.0 00:02:23.252 SYMLINK libspdk_blobfs.so 00:02:23.510 SYMLINK libspdk_lvol.so 00:02:23.767 LIB libspdk_nbd.a 00:02:23.767 SO libspdk_nbd.so.7.0 00:02:23.767 LIB libspdk_scsi.a 00:02:24.025 SO libspdk_scsi.so.9.0 00:02:24.025 SYMLINK libspdk_nbd.so 00:02:24.025 SYMLINK libspdk_scsi.so 00:02:24.025 LIB libspdk_ublk.a 00:02:24.025 SO libspdk_ublk.so.3.0 00:02:24.025 SYMLINK libspdk_ublk.so 00:02:24.334 CC lib/iscsi/init_grp.o 00:02:24.334 CC lib/iscsi/conn.o 00:02:24.334 CC lib/iscsi/md5.o 00:02:24.334 CC lib/vhost/vhost.o 00:02:24.334 CC lib/iscsi/param.o 00:02:24.334 CC lib/vhost/vhost_scsi.o 00:02:24.334 
LIB libspdk_ftl.a 00:02:24.334 CC lib/iscsi/iscsi.o 00:02:24.334 CC lib/vhost/vhost_rpc.o 00:02:24.334 CC lib/vhost/rte_vhost_user.o 00:02:24.334 CC lib/iscsi/portal_grp.o 00:02:24.334 CC lib/vhost/vhost_blk.o 00:02:24.334 CC lib/iscsi/tgt_node.o 00:02:24.334 CC lib/iscsi/iscsi_subsystem.o 00:02:24.334 CC lib/iscsi/iscsi_rpc.o 00:02:24.334 CC lib/iscsi/task.o 00:02:24.334 SO libspdk_ftl.so.9.0 00:02:24.899 SYMLINK libspdk_ftl.so 00:02:25.157 LIB libspdk_vhost.a 00:02:25.157 SO libspdk_vhost.so.8.0 00:02:25.415 SYMLINK libspdk_vhost.so 00:02:25.415 LIB libspdk_nvmf.a 00:02:25.415 SO libspdk_nvmf.so.18.1 00:02:25.673 LIB libspdk_iscsi.a 00:02:25.673 SO libspdk_iscsi.so.8.0 00:02:25.673 SYMLINK libspdk_nvmf.so 00:02:25.673 SYMLINK libspdk_iscsi.so 00:02:26.240 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.240 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.498 CC module/accel/iaa/accel_iaa.o 00:02:26.499 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.499 CC module/keyring/linux/keyring.o 00:02:26.499 CC module/keyring/linux/keyring_rpc.o 00:02:26.499 CC module/sock/posix/posix.o 00:02:26.499 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.499 LIB libspdk_env_dpdk_rpc.a 00:02:26.499 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.499 CC module/accel/ioat/accel_ioat.o 00:02:26.499 CC module/accel/dsa/accel_dsa.o 00:02:26.499 CC module/keyring/file/keyring.o 00:02:26.499 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.499 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.499 CC module/keyring/file/keyring_rpc.o 00:02:26.499 CC module/blob/bdev/blob_bdev.o 00:02:26.499 CC module/accel/error/accel_error.o 00:02:26.499 CC module/accel/error/accel_error_rpc.o 00:02:26.499 SO libspdk_env_dpdk_rpc.so.6.0 00:02:26.499 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.499 LIB libspdk_keyring_linux.a 00:02:26.499 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.499 LIB libspdk_scheduler_gscheduler.a 00:02:26.499 LIB libspdk_keyring_file.a 00:02:26.499 SO 
libspdk_keyring_linux.so.1.0 00:02:26.499 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:26.499 LIB libspdk_accel_iaa.a 00:02:26.499 LIB libspdk_accel_error.a 00:02:26.499 SO libspdk_scheduler_gscheduler.so.4.0 00:02:26.499 SO libspdk_keyring_file.so.1.0 00:02:26.499 LIB libspdk_scheduler_dynamic.a 00:02:26.499 LIB libspdk_accel_ioat.a 00:02:26.499 SO libspdk_accel_iaa.so.3.0 00:02:26.499 SO libspdk_accel_error.so.2.0 00:02:26.499 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.499 SYMLINK libspdk_keyring_linux.so 00:02:26.757 SO libspdk_scheduler_dynamic.so.4.0 00:02:26.757 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.757 SO libspdk_accel_ioat.so.6.0 00:02:26.757 SYMLINK libspdk_keyring_file.so 00:02:26.757 LIB libspdk_accel_dsa.a 00:02:26.757 LIB libspdk_blob_bdev.a 00:02:26.757 SYMLINK libspdk_accel_error.so 00:02:26.757 SYMLINK libspdk_accel_iaa.so 00:02:26.757 SO libspdk_accel_dsa.so.5.0 00:02:26.757 SO libspdk_blob_bdev.so.11.0 00:02:26.757 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.757 SYMLINK libspdk_accel_ioat.so 00:02:26.757 SYMLINK libspdk_accel_dsa.so 00:02:26.757 SYMLINK libspdk_blob_bdev.so 00:02:27.015 LIB libspdk_sock_posix.a 00:02:27.273 SO libspdk_sock_posix.so.6.0 00:02:27.273 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.273 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.273 CC module/bdev/gpt/gpt.o 00:02:27.273 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.273 CC module/bdev/error/vbdev_error.o 00:02:27.273 CC module/bdev/delay/vbdev_delay.o 00:02:27.273 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.273 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.273 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.273 CC module/bdev/null/bdev_null.o 00:02:27.273 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.273 CC module/bdev/malloc/bdev_malloc.o 00:02:27.273 CC module/bdev/null/bdev_null_rpc.o 00:02:27.273 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.273 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.273 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.273 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.273 CC module/bdev/aio/bdev_aio.o 00:02:27.273 CC module/bdev/split/vbdev_split.o 00:02:27.273 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.273 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.273 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.273 CC module/bdev/raid/bdev_raid.o 00:02:27.273 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.273 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.273 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.273 CC module/bdev/raid/raid1.o 00:02:27.273 CC module/bdev/nvme/bdev_nvme.o 00:02:27.273 CC module/bdev/raid/raid0.o 00:02:27.273 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.273 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.273 CC module/bdev/raid/concat.o 00:02:27.273 CC module/bdev/nvme/nvme_rpc.o 00:02:27.273 CC module/bdev/nvme/vbdev_opal.o 00:02:27.273 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.273 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.273 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.273 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.273 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.273 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.273 CC module/bdev/ftl/bdev_ftl.o 00:02:27.273 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.273 SYMLINK libspdk_sock_posix.so 00:02:27.531 LIB libspdk_blobfs_bdev.a 00:02:27.531 SO libspdk_blobfs_bdev.so.6.0 00:02:27.531 LIB libspdk_bdev_gpt.a 00:02:27.531 LIB libspdk_bdev_split.a 00:02:27.531 LIB libspdk_bdev_null.a 00:02:27.531 SYMLINK libspdk_blobfs_bdev.so 00:02:27.531 LIB libspdk_bdev_error.a 00:02:27.531 SO libspdk_bdev_split.so.6.0 00:02:27.531 SO libspdk_bdev_gpt.so.6.0 00:02:27.531 SO libspdk_bdev_null.so.6.0 00:02:27.531 SO libspdk_bdev_error.so.6.0 00:02:27.531 LIB libspdk_bdev_ftl.a 00:02:27.531 LIB libspdk_bdev_iscsi.a 00:02:27.531 LIB libspdk_bdev_passthru.a 00:02:27.531 SO libspdk_bdev_ftl.so.6.0 00:02:27.531 LIB libspdk_bdev_zone_block.a 00:02:27.531 
LIB libspdk_bdev_aio.a 00:02:27.531 SYMLINK libspdk_bdev_gpt.so 00:02:27.531 LIB libspdk_bdev_delay.a 00:02:27.531 SYMLINK libspdk_bdev_split.so 00:02:27.531 SYMLINK libspdk_bdev_error.so 00:02:27.531 SO libspdk_bdev_iscsi.so.6.0 00:02:27.531 SYMLINK libspdk_bdev_null.so 00:02:27.531 SO libspdk_bdev_passthru.so.6.0 00:02:27.531 SO libspdk_bdev_delay.so.6.0 00:02:27.531 SO libspdk_bdev_aio.so.6.0 00:02:27.531 SO libspdk_bdev_zone_block.so.6.0 00:02:27.531 LIB libspdk_bdev_malloc.a 00:02:27.531 SYMLINK libspdk_bdev_ftl.so 00:02:27.789 SO libspdk_bdev_malloc.so.6.0 00:02:27.789 SYMLINK libspdk_bdev_iscsi.so 00:02:27.790 SYMLINK libspdk_bdev_passthru.so 00:02:27.790 SYMLINK libspdk_bdev_aio.so 00:02:27.790 SYMLINK libspdk_bdev_zone_block.so 00:02:27.790 SYMLINK libspdk_bdev_delay.so 00:02:27.790 SYMLINK libspdk_bdev_malloc.so 00:02:27.790 LIB libspdk_bdev_virtio.a 00:02:27.790 LIB libspdk_bdev_lvol.a 00:02:27.790 SO libspdk_bdev_virtio.so.6.0 00:02:27.790 SO libspdk_bdev_lvol.so.6.0 00:02:27.790 SYMLINK libspdk_bdev_virtio.so 00:02:27.790 SYMLINK libspdk_bdev_lvol.so 00:02:28.047 LIB libspdk_bdev_raid.a 00:02:28.306 SO libspdk_bdev_raid.so.6.0 00:02:28.306 SYMLINK libspdk_bdev_raid.so 00:02:29.242 LIB libspdk_bdev_nvme.a 00:02:29.242 SO libspdk_bdev_nvme.so.7.0 00:02:29.501 SYMLINK libspdk_bdev_nvme.so 00:02:30.068 CC module/event/subsystems/iobuf/iobuf.o 00:02:30.068 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:30.068 CC module/event/subsystems/vmd/vmd.o 00:02:30.068 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:30.068 CC module/event/subsystems/scheduler/scheduler.o 00:02:30.068 CC module/event/subsystems/sock/sock.o 00:02:30.068 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:30.068 CC module/event/subsystems/keyring/keyring.o 00:02:30.068 LIB libspdk_event_scheduler.a 00:02:30.068 LIB libspdk_event_vmd.a 00:02:30.068 LIB libspdk_event_iobuf.a 00:02:30.068 LIB libspdk_event_vhost_blk.a 00:02:30.068 LIB libspdk_event_sock.a 00:02:30.068 LIB 
libspdk_event_keyring.a 00:02:30.068 SO libspdk_event_scheduler.so.4.0 00:02:30.068 SO libspdk_event_iobuf.so.3.0 00:02:30.068 SO libspdk_event_vhost_blk.so.3.0 00:02:30.068 SO libspdk_event_vmd.so.6.0 00:02:30.068 SO libspdk_event_sock.so.5.0 00:02:30.068 SO libspdk_event_keyring.so.1.0 00:02:30.327 SYMLINK libspdk_event_scheduler.so 00:02:30.327 SYMLINK libspdk_event_vhost_blk.so 00:02:30.327 SYMLINK libspdk_event_iobuf.so 00:02:30.327 SYMLINK libspdk_event_vmd.so 00:02:30.327 SYMLINK libspdk_event_keyring.so 00:02:30.327 SYMLINK libspdk_event_sock.so 00:02:30.586 CC module/event/subsystems/accel/accel.o 00:02:30.586 LIB libspdk_event_accel.a 00:02:30.586 SO libspdk_event_accel.so.6.0 00:02:30.586 SYMLINK libspdk_event_accel.so 00:02:31.153 CC module/event/subsystems/bdev/bdev.o 00:02:31.153 LIB libspdk_event_bdev.a 00:02:31.153 SO libspdk_event_bdev.so.6.0 00:02:31.153 SYMLINK libspdk_event_bdev.so 00:02:31.412 CC module/event/subsystems/scsi/scsi.o 00:02:31.412 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.412 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.670 CC module/event/subsystems/nbd/nbd.o 00:02:31.670 CC module/event/subsystems/ublk/ublk.o 00:02:31.670 LIB libspdk_event_ublk.a 00:02:31.670 LIB libspdk_event_scsi.a 00:02:31.670 LIB libspdk_event_nbd.a 00:02:31.670 SO libspdk_event_ublk.so.3.0 00:02:31.671 SO libspdk_event_scsi.so.6.0 00:02:31.671 SO libspdk_event_nbd.so.6.0 00:02:31.671 LIB libspdk_event_nvmf.a 00:02:31.671 SYMLINK libspdk_event_nbd.so 00:02:31.671 SYMLINK libspdk_event_ublk.so 00:02:31.671 SYMLINK libspdk_event_scsi.so 00:02:31.671 SO libspdk_event_nvmf.so.6.0 00:02:31.929 SYMLINK libspdk_event_nvmf.so 00:02:31.929 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.187 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.187 LIB libspdk_event_iscsi.a 00:02:32.187 LIB libspdk_event_vhost_scsi.a 00:02:32.187 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.187 SO libspdk_event_iscsi.so.6.0 00:02:32.187 SYMLINK 
libspdk_event_iscsi.so 00:02:32.187 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.445 SO libspdk.so.6.0 00:02:32.445 SYMLINK libspdk.so 00:02:32.704 CC test/rpc_client/rpc_client_test.o 00:02:32.704 CXX app/trace/trace.o 00:02:32.704 TEST_HEADER include/spdk/accel.h 00:02:32.704 TEST_HEADER include/spdk/accel_module.h 00:02:32.704 TEST_HEADER include/spdk/barrier.h 00:02:32.704 TEST_HEADER include/spdk/assert.h 00:02:32.704 TEST_HEADER include/spdk/base64.h 00:02:32.704 TEST_HEADER include/spdk/bdev_module.h 00:02:32.704 TEST_HEADER include/spdk/bdev.h 00:02:32.704 TEST_HEADER include/spdk/bit_array.h 00:02:32.704 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.704 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.704 TEST_HEADER include/spdk/bit_pool.h 00:02:32.704 CC app/spdk_top/spdk_top.o 00:02:32.704 TEST_HEADER include/spdk/blobfs.h 00:02:32.704 TEST_HEADER include/spdk/blob.h 00:02:32.704 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.704 TEST_HEADER include/spdk/conf.h 00:02:32.704 CC app/spdk_nvme_identify/identify.o 00:02:32.704 TEST_HEADER include/spdk/crc16.h 00:02:32.704 TEST_HEADER include/spdk/config.h 00:02:32.704 TEST_HEADER include/spdk/cpuset.h 00:02:32.704 TEST_HEADER include/spdk/crc64.h 00:02:32.704 TEST_HEADER include/spdk/crc32.h 00:02:32.704 TEST_HEADER include/spdk/dma.h 00:02:32.704 TEST_HEADER include/spdk/dif.h 00:02:32.704 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.704 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.704 CC app/spdk_nvme_perf/perf.o 00:02:32.704 TEST_HEADER include/spdk/event.h 00:02:32.704 TEST_HEADER include/spdk/endian.h 00:02:32.704 TEST_HEADER include/spdk/env.h 00:02:32.704 TEST_HEADER include/spdk/fd.h 00:02:32.704 TEST_HEADER include/spdk/fd_group.h 00:02:32.704 TEST_HEADER include/spdk/file.h 00:02:32.704 TEST_HEADER include/spdk/ftl.h 00:02:32.704 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.704 TEST_HEADER include/spdk/hexlify.h 00:02:32.704 TEST_HEADER include/spdk/histogram_data.h 00:02:32.704 TEST_HEADER 
include/spdk/idxd_spec.h 00:02:32.704 TEST_HEADER include/spdk/init.h 00:02:32.704 TEST_HEADER include/spdk/idxd.h 00:02:32.704 TEST_HEADER include/spdk/ioat.h 00:02:32.704 CC app/trace_record/trace_record.o 00:02:32.704 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.704 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.704 TEST_HEADER include/spdk/json.h 00:02:32.704 TEST_HEADER include/spdk/keyring.h 00:02:32.704 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.704 TEST_HEADER include/spdk/likely.h 00:02:32.704 TEST_HEADER include/spdk/keyring_module.h 00:02:32.704 CC app/spdk_lspci/spdk_lspci.o 00:02:32.704 TEST_HEADER include/spdk/lvol.h 00:02:32.704 TEST_HEADER include/spdk/log.h 00:02:32.704 TEST_HEADER include/spdk/mmio.h 00:02:32.704 TEST_HEADER include/spdk/memory.h 00:02:32.704 TEST_HEADER include/spdk/nvme.h 00:02:32.704 TEST_HEADER include/spdk/notify.h 00:02:32.704 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.704 TEST_HEADER include/spdk/nbd.h 00:02:32.704 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.704 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.704 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.704 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.704 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.704 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.704 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.704 TEST_HEADER include/spdk/nvmf.h 00:02:32.704 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.704 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.704 TEST_HEADER include/spdk/opal_spec.h 00:02:32.704 TEST_HEADER include/spdk/pipe.h 00:02:32.704 TEST_HEADER include/spdk/pci_ids.h 00:02:32.704 TEST_HEADER include/spdk/opal.h 00:02:32.704 TEST_HEADER include/spdk/queue.h 00:02:32.704 TEST_HEADER include/spdk/reduce.h 00:02:32.704 TEST_HEADER include/spdk/rpc.h 00:02:32.704 TEST_HEADER include/spdk/scheduler.h 00:02:32.704 TEST_HEADER include/spdk/scsi.h 00:02:32.704 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.704 TEST_HEADER include/spdk/scsi_spec.h 
00:02:32.704 TEST_HEADER include/spdk/stdinc.h 00:02:32.704 TEST_HEADER include/spdk/sock.h 00:02:32.704 TEST_HEADER include/spdk/thread.h 00:02:32.704 TEST_HEADER include/spdk/string.h 00:02:32.704 TEST_HEADER include/spdk/trace.h 00:02:32.704 TEST_HEADER include/spdk/trace_parser.h 00:02:32.704 TEST_HEADER include/spdk/tree.h 00:02:32.704 TEST_HEADER include/spdk/ublk.h 00:02:32.704 TEST_HEADER include/spdk/util.h 00:02:32.704 CC app/spdk_dd/spdk_dd.o 00:02:32.704 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.704 TEST_HEADER include/spdk/version.h 00:02:32.704 TEST_HEADER include/spdk/uuid.h 00:02:32.704 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.704 TEST_HEADER include/spdk/vmd.h 00:02:32.704 TEST_HEADER include/spdk/xor.h 00:02:32.705 TEST_HEADER include/spdk/vhost.h 00:02:32.705 TEST_HEADER include/spdk/zipf.h 00:02:32.705 CXX test/cpp_headers/accel_module.o 00:02:32.705 CXX test/cpp_headers/assert.o 00:02:32.705 CXX test/cpp_headers/barrier.o 00:02:32.705 CXX test/cpp_headers/base64.o 00:02:32.705 CXX test/cpp_headers/bdev.o 00:02:32.705 CXX test/cpp_headers/accel.o 00:02:32.705 CXX test/cpp_headers/bdev_module.o 00:02:32.705 CXX test/cpp_headers/bdev_zone.o 00:02:32.705 CXX test/cpp_headers/bit_array.o 00:02:32.705 CXX test/cpp_headers/bit_pool.o 00:02:32.705 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.705 CXX test/cpp_headers/blob_bdev.o 00:02:32.705 CXX test/cpp_headers/blobfs.o 00:02:32.705 CXX test/cpp_headers/blob.o 00:02:32.705 CXX test/cpp_headers/conf.o 00:02:32.705 CXX test/cpp_headers/config.o 00:02:32.705 CXX test/cpp_headers/cpuset.o 00:02:32.705 CXX test/cpp_headers/crc16.o 00:02:32.705 CC app/nvmf_tgt/nvmf_main.o 00:02:32.705 CXX test/cpp_headers/crc32.o 00:02:32.705 CXX test/cpp_headers/dif.o 00:02:32.705 CXX test/cpp_headers/crc64.o 00:02:32.705 CXX test/cpp_headers/endian.o 00:02:32.705 CXX test/cpp_headers/env_dpdk.o 00:02:32.705 CXX test/cpp_headers/dma.o 00:02:32.705 CXX test/cpp_headers/env.o 00:02:32.705 CXX 
test/cpp_headers/event.o 00:02:32.974 CXX test/cpp_headers/fd.o 00:02:32.974 CXX test/cpp_headers/file.o 00:02:32.974 CXX test/cpp_headers/gpt_spec.o 00:02:32.974 CXX test/cpp_headers/fd_group.o 00:02:32.974 CXX test/cpp_headers/hexlify.o 00:02:32.974 CXX test/cpp_headers/histogram_data.o 00:02:32.974 CXX test/cpp_headers/idxd.o 00:02:32.974 CXX test/cpp_headers/ftl.o 00:02:32.974 CXX test/cpp_headers/init.o 00:02:32.974 CXX test/cpp_headers/ioat.o 00:02:32.974 CXX test/cpp_headers/idxd_spec.o 00:02:32.974 CXX test/cpp_headers/iscsi_spec.o 00:02:32.974 CXX test/cpp_headers/ioat_spec.o 00:02:32.974 CXX test/cpp_headers/json.o 00:02:32.974 CXX test/cpp_headers/jsonrpc.o 00:02:32.974 CXX test/cpp_headers/keyring.o 00:02:32.974 CXX test/cpp_headers/likely.o 00:02:32.974 CC app/spdk_tgt/spdk_tgt.o 00:02:32.974 CXX test/cpp_headers/log.o 00:02:32.974 CXX test/cpp_headers/keyring_module.o 00:02:32.974 CXX test/cpp_headers/memory.o 00:02:32.974 CXX test/cpp_headers/lvol.o 00:02:32.974 CXX test/cpp_headers/mmio.o 00:02:32.974 CXX test/cpp_headers/nbd.o 00:02:32.974 CXX test/cpp_headers/notify.o 00:02:32.974 CXX test/cpp_headers/nvme.o 00:02:32.974 CXX test/cpp_headers/nvme_intel.o 00:02:32.974 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.974 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.974 CXX test/cpp_headers/nvme_spec.o 00:02:32.974 CXX test/cpp_headers/nvme_zns.o 00:02:32.974 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.974 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.974 CXX test/cpp_headers/nvmf.o 00:02:32.974 CXX test/cpp_headers/nvmf_transport.o 00:02:32.974 CXX test/cpp_headers/nvmf_spec.o 00:02:32.974 CXX test/cpp_headers/opal.o 00:02:32.974 CXX test/cpp_headers/opal_spec.o 00:02:32.974 CXX test/cpp_headers/pipe.o 00:02:32.974 CXX test/cpp_headers/pci_ids.o 00:02:32.974 CXX test/cpp_headers/queue.o 00:02:32.974 CXX test/cpp_headers/reduce.o 00:02:32.974 CC test/app/histogram_perf/histogram_perf.o 00:02:32.974 CC test/env/pci/pci_ut.o 00:02:32.974 CC 
test/app/stub/stub.o 00:02:32.974 CC test/app/jsoncat/jsoncat.o 00:02:32.974 CC test/thread/poller_perf/poller_perf.o 00:02:32.974 CC examples/ioat/perf/perf.o 00:02:32.974 CC test/env/vtophys/vtophys.o 00:02:32.974 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.974 CC examples/util/zipf/zipf.o 00:02:32.974 CXX test/cpp_headers/rpc.o 00:02:32.974 CC examples/ioat/verify/verify.o 00:02:32.974 CC test/app/bdev_svc/bdev_svc.o 00:02:32.974 CC test/env/memory/memory_ut.o 00:02:32.974 CC test/dma/test_dma/test_dma.o 00:02:32.974 CC app/fio/nvme/fio_plugin.o 00:02:33.236 CC app/fio/bdev/fio_plugin.o 00:02:33.236 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.236 LINK spdk_lspci 00:02:33.236 LINK spdk_nvme_discover 00:02:33.236 LINK rpc_client_test 00:02:33.499 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.499 LINK vtophys 00:02:33.499 LINK interrupt_tgt 00:02:33.499 LINK iscsi_tgt 00:02:33.499 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.499 CXX test/cpp_headers/scheduler.o 00:02:33.499 CXX test/cpp_headers/scsi.o 00:02:33.499 LINK env_dpdk_post_init 00:02:33.499 CXX test/cpp_headers/scsi_spec.o 00:02:33.499 LINK zipf 00:02:33.499 CXX test/cpp_headers/sock.o 00:02:33.499 LINK nvmf_tgt 00:02:33.499 CXX test/cpp_headers/string.o 00:02:33.499 CXX test/cpp_headers/thread.o 00:02:33.499 CXX test/cpp_headers/stdinc.o 00:02:33.499 CXX test/cpp_headers/trace.o 00:02:33.499 LINK spdk_tgt 00:02:33.499 LINK stub 00:02:33.499 CXX test/cpp_headers/trace_parser.o 00:02:33.499 CXX test/cpp_headers/tree.o 00:02:33.499 CXX test/cpp_headers/ublk.o 00:02:33.499 CXX test/cpp_headers/util.o 00:02:33.499 CXX test/cpp_headers/uuid.o 00:02:33.499 CXX test/cpp_headers/version.o 00:02:33.499 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.499 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.499 CXX test/cpp_headers/vhost.o 00:02:33.499 CXX test/cpp_headers/vmd.o 00:02:33.499 LINK jsoncat 00:02:33.499 CXX test/cpp_headers/xor.o 00:02:33.499 CXX test/cpp_headers/zipf.o 
00:02:33.499 LINK poller_perf 00:02:33.499 LINK histogram_perf 00:02:33.499 LINK bdev_svc 00:02:33.499 LINK spdk_trace_record 00:02:33.499 LINK ioat_perf 00:02:33.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.757 LINK verify 00:02:33.757 LINK spdk_dd 00:02:33.757 LINK spdk_trace 00:02:34.015 CC examples/sock/hello_world/hello_sock.o 00:02:34.015 LINK pci_ut 00:02:34.015 LINK spdk_bdev 00:02:34.015 LINK test_dma 00:02:34.015 CC examples/vmd/led/led.o 00:02:34.015 CC examples/vmd/lsvmd/lsvmd.o 00:02:34.015 LINK nvme_fuzz 00:02:34.015 CC examples/idxd/perf/perf.o 00:02:34.015 CC examples/thread/thread/thread_ex.o 00:02:34.015 CC test/event/reactor/reactor.o 00:02:34.015 CC test/event/reactor_perf/reactor_perf.o 00:02:34.015 CC test/event/event_perf/event_perf.o 00:02:34.015 LINK vhost_fuzz 00:02:34.015 LINK spdk_nvme 00:02:34.015 CC test/event/app_repeat/app_repeat.o 00:02:34.015 LINK mem_callbacks 00:02:34.015 CC test/event/scheduler/scheduler.o 00:02:34.015 LINK lsvmd 00:02:34.015 CC app/vhost/vhost.o 00:02:34.015 LINK led 00:02:34.273 LINK spdk_nvme_identify 00:02:34.273 LINK reactor 00:02:34.273 LINK event_perf 00:02:34.273 LINK reactor_perf 00:02:34.273 LINK hello_sock 00:02:34.273 LINK spdk_nvme_perf 00:02:34.273 LINK app_repeat 00:02:34.273 LINK thread 00:02:34.273 LINK spdk_top 00:02:34.273 LINK vhost 00:02:34.273 LINK scheduler 00:02:34.273 LINK idxd_perf 00:02:34.273 CC test/nvme/cuse/cuse.o 00:02:34.273 CC test/nvme/sgl/sgl.o 00:02:34.273 CC test/nvme/reset/reset.o 00:02:34.273 CC test/nvme/aer/aer.o 00:02:34.273 CC test/nvme/e2edp/nvme_dp.o 00:02:34.273 CC test/nvme/err_injection/err_injection.o 00:02:34.273 CC test/nvme/boot_partition/boot_partition.o 00:02:34.273 CC test/nvme/connect_stress/connect_stress.o 00:02:34.273 CC test/nvme/startup/startup.o 00:02:34.273 CC test/nvme/fdp/fdp.o 00:02:34.273 CC test/nvme/fused_ordering/fused_ordering.o 00:02:34.273 CC test/nvme/reserve/reserve.o 
00:02:34.273 CC test/nvme/overhead/overhead.o 00:02:34.273 CC test/nvme/compliance/nvme_compliance.o 00:02:34.273 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.273 CC test/blobfs/mkfs/mkfs.o 00:02:34.273 CC test/nvme/simple_copy/simple_copy.o 00:02:34.273 CC test/accel/dif/dif.o 00:02:34.532 LINK memory_ut 00:02:34.532 CC test/lvol/esnap/esnap.o 00:02:34.532 LINK boot_partition 00:02:34.532 LINK startup 00:02:34.532 LINK connect_stress 00:02:34.532 LINK err_injection 00:02:34.532 LINK fused_ordering 00:02:34.532 LINK doorbell_aers 00:02:34.532 LINK mkfs 00:02:34.532 LINK reserve 00:02:34.532 CC examples/nvme/hello_world/hello_world.o 00:02:34.532 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:34.532 LINK sgl 00:02:34.532 CC examples/nvme/abort/abort.o 00:02:34.532 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:34.532 CC examples/nvme/hotplug/hotplug.o 00:02:34.532 CC examples/nvme/reconnect/reconnect.o 00:02:34.532 CC examples/nvme/arbitration/arbitration.o 00:02:34.532 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:34.532 LINK reset 00:02:34.532 LINK nvme_dp 00:02:34.532 LINK simple_copy 00:02:34.791 LINK aer 00:02:34.791 LINK overhead 00:02:34.791 CC examples/accel/perf/accel_perf.o 00:02:34.791 LINK fdp 00:02:34.791 LINK nvme_compliance 00:02:34.791 CC examples/blob/cli/blobcli.o 00:02:34.791 CC examples/blob/hello_world/hello_blob.o 00:02:34.791 LINK cmb_copy 00:02:34.791 LINK pmr_persistence 00:02:34.791 LINK hello_world 00:02:34.791 LINK dif 00:02:34.791 LINK hotplug 00:02:35.050 LINK arbitration 00:02:35.050 LINK reconnect 00:02:35.050 LINK hello_blob 00:02:35.050 LINK abort 00:02:35.050 LINK nvme_manage 00:02:35.050 LINK accel_perf 00:02:35.307 LINK iscsi_fuzz 00:02:35.307 LINK blobcli 00:02:35.307 CC test/bdev/bdevio/bdevio.o 00:02:35.565 LINK cuse 00:02:35.565 CC examples/bdev/hello_world/hello_bdev.o 00:02:35.565 CC examples/bdev/bdevperf/bdevperf.o 00:02:35.822 LINK bdevio 00:02:35.822 LINK hello_bdev 00:02:36.388 LINK bdevperf 
00:02:36.953 CC examples/nvmf/nvmf/nvmf.o 00:02:37.211 LINK nvmf 00:02:39.114 LINK esnap 00:02:39.372 00:02:39.372 real 0m47.517s 00:02:39.372 user 6m53.972s 00:02:39.372 sys 3m16.402s 00:02:39.372 11:08:25 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:39.372 11:08:25 make -- common/autotest_common.sh@10 -- $ set +x 00:02:39.372 ************************************ 00:02:39.372 END TEST make 00:02:39.372 ************************************ 00:02:39.372 11:08:25 -- common/autotest_common.sh@1142 -- $ return 0 00:02:39.372 11:08:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:39.372 11:08:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:39.372 11:08:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:39.372 11:08:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.372 11:08:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:39.372 11:08:25 -- pm/common@44 -- $ pid=617736 00:02:39.372 11:08:25 -- pm/common@50 -- $ kill -TERM 617736 00:02:39.372 11:08:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.372 11:08:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:39.372 11:08:25 -- pm/common@44 -- $ pid=617737 00:02:39.372 11:08:25 -- pm/common@50 -- $ kill -TERM 617737 00:02:39.372 11:08:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.372 11:08:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:39.372 11:08:25 -- pm/common@44 -- $ pid=617739 00:02:39.372 11:08:25 -- pm/common@50 -- $ kill -TERM 617739 00:02:39.372 11:08:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.372 11:08:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 
00:02:39.372 11:08:25 -- pm/common@44 -- $ pid=617765 00:02:39.372 11:08:25 -- pm/common@50 -- $ sudo -E kill -TERM 617765 00:02:39.372 11:08:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:39.372 11:08:25 -- nvmf/common.sh@7 -- # uname -s 00:02:39.372 11:08:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:39.372 11:08:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:39.372 11:08:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:39.372 11:08:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:39.372 11:08:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:39.372 11:08:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:39.372 11:08:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:39.372 11:08:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:39.372 11:08:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:39.372 11:08:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:39.372 11:08:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:39.372 11:08:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:39.372 11:08:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:39.372 11:08:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:39.372 11:08:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:39.372 11:08:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:39.372 11:08:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:39.372 11:08:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:39.372 11:08:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:39.372 11:08:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:39.372 11:08:25 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.372 11:08:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.372 11:08:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.372 11:08:25 -- paths/export.sh@5 -- # export PATH 00:02:39.372 11:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:39.372 11:08:25 -- nvmf/common.sh@47 -- # : 0 00:02:39.372 11:08:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:39.372 11:08:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:39.372 11:08:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:39.372 11:08:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:39.372 11:08:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:39.372 11:08:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:39.630 11:08:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:39.630 11:08:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:39.630 11:08:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:39.630 11:08:25 -- 
spdk/autotest.sh@32 -- # uname -s 00:02:39.630 11:08:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:39.630 11:08:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:39.630 11:08:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.630 11:08:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:39.630 11:08:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:39.630 11:08:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:39.630 11:08:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:39.630 11:08:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:39.630 11:08:25 -- spdk/autotest.sh@48 -- # udevadm_pid=676791 00:02:39.630 11:08:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:39.630 11:08:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:39.630 11:08:25 -- pm/common@17 -- # local monitor 00:02:39.630 11:08:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.630 11:08:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.630 11:08:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.630 11:08:25 -- pm/common@21 -- # date +%s 00:02:39.630 11:08:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.630 11:08:25 -- pm/common@21 -- # date +%s 00:02:39.630 11:08:25 -- pm/common@25 -- # sleep 1 00:02:39.630 11:08:25 -- pm/common@21 -- # date +%s 00:02:39.630 11:08:25 -- pm/common@21 -- # date +%s 00:02:39.630 11:08:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775305 00:02:39.630 11:08:25 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775305 00:02:39.630 11:08:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775305 00:02:39.630 11:08:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775305 00:02:39.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775305_collect-cpu-temp.pm.log 00:02:39.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775305_collect-vmstat.pm.log 00:02:39.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775305_collect-cpu-load.pm.log 00:02:39.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775305_collect-bmc-pm.bmc.pm.log 00:02:40.567 11:08:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:40.567 11:08:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:40.567 11:08:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:40.567 11:08:26 -- common/autotest_common.sh@10 -- # set +x 00:02:40.567 11:08:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:40.567 11:08:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:40.567 11:08:26 -- common/autotest_common.sh@10 -- # set +x 00:02:40.567 11:08:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:40.567 11:08:26 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.567 11:08:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.567 11:08:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:40.567 11:08:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.567 11:08:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:40.567 11:08:26 -- common/autotest_common.sh@1455 -- # uname 00:02:40.567 11:08:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:40.567 11:08:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:40.567 11:08:26 -- common/autotest_common.sh@1475 -- # uname 00:02:40.567 11:08:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:40.567 11:08:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:40.567 11:08:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:40.567 11:08:26 -- spdk/autotest.sh@72 -- # hash lcov 00:02:40.567 11:08:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:40.567 11:08:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:40.567 --rc lcov_branch_coverage=1 00:02:40.567 --rc lcov_function_coverage=1 00:02:40.567 --rc genhtml_branch_coverage=1 00:02:40.567 --rc genhtml_function_coverage=1 00:02:40.567 --rc genhtml_legend=1 00:02:40.567 --rc geninfo_all_blocks=1 00:02:40.567 ' 00:02:40.567 11:08:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:40.567 --rc lcov_branch_coverage=1 00:02:40.567 --rc lcov_function_coverage=1 00:02:40.567 --rc genhtml_branch_coverage=1 00:02:40.567 --rc genhtml_function_coverage=1 00:02:40.567 --rc genhtml_legend=1 00:02:40.567 --rc geninfo_all_blocks=1 00:02:40.567 ' 00:02:40.567 11:08:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:40.567 --rc lcov_branch_coverage=1 00:02:40.567 --rc lcov_function_coverage=1 00:02:40.567 --rc genhtml_branch_coverage=1 00:02:40.567 --rc 
genhtml_function_coverage=1 00:02:40.567 --rc genhtml_legend=1 00:02:40.567 --rc geninfo_all_blocks=1 00:02:40.567 --no-external' 00:02:40.567 11:08:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:40.567 --rc lcov_branch_coverage=1 00:02:40.567 --rc lcov_function_coverage=1 00:02:40.567 --rc genhtml_branch_coverage=1 00:02:40.567 --rc genhtml_function_coverage=1 00:02:40.567 --rc genhtml_legend=1 00:02:40.567 --rc geninfo_all_blocks=1 00:02:40.567 --no-external' 00:02:40.567 11:08:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:40.567 lcov: LCOV version 1.14 00:02:40.568 11:08:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:44.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:44.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:44.757 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 
00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no 
functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:44.757 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 
00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:44.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:44.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:44.758 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:44.758 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:44.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:45.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:45.016 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:45.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no 
functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:45.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:45.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:59.884 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:59.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.062 11:08:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:04.062 11:08:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:04.062 11:08:50 -- common/autotest_common.sh@10 -- # set +x 00:03:04.062 11:08:50 -- spdk/autotest.sh@91 -- # rm -f 00:03:04.062 11:08:50 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.612 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:06.612 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.612 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.613 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.613 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.613 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.613 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.613 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.870 11:08:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:06.870 11:08:53 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
00:03:06.870 11:08:53 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:06.870 11:08:53 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:06.870 11:08:53 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:06.870 11:08:53 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:06.870 11:08:53 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:06.870 11:08:53 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.870 11:08:53 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:06.870 11:08:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:06.870 11:08:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.870 11:08:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:06.870 11:08:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:06.870 11:08:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:06.870 11:08:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.870 No valid GPT data, bailing 00:03:06.870 11:08:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.870 11:08:53 -- scripts/common.sh@391 -- # pt= 00:03:06.870 11:08:53 -- scripts/common.sh@392 -- # return 1 00:03:06.870 11:08:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.870 1+0 records in 00:03:06.870 1+0 records out 00:03:06.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458872 s, 229 MB/s 00:03:06.870 11:08:53 -- spdk/autotest.sh@118 -- # sync 00:03:06.870 11:08:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.870 11:08:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.870 11:08:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:12.135 11:08:58 -- spdk/autotest.sh@124 -- # uname -s 00:03:12.135 11:08:58 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:12.135 11:08:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:12.135 11:08:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.135 11:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.135 11:08:58 -- common/autotest_common.sh@10 -- # set +x 00:03:12.135 ************************************ 00:03:12.135 START TEST setup.sh 00:03:12.135 ************************************ 00:03:12.135 11:08:58 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:12.135 * Looking for test storage... 00:03:12.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.135 11:08:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:12.135 11:08:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:12.135 11:08:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:12.135 11:08:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.135 11:08:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.135 11:08:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.135 ************************************ 00:03:12.135 START TEST acl 00:03:12.135 ************************************ 00:03:12.135 11:08:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:12.393 * Looking for test storage... 
00:03:12.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.393 11:08:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:12.394 11:08:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:12.394 11:08:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:12.394 11:08:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:12.394 11:08:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:12.394 11:08:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:12.394 11:08:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:12.394 11:08:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.394 11:08:58 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.677 11:09:01 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:15.677 11:09:01 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:15.677 11:09:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.677 11:09:01 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:15.677 11:09:01 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.677 11:09:01 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:18.206 Hugepages 00:03:18.206 node hugesize free / total 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 00:03:18.206 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:18.206 11:09:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:18.206 11:09:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.206 11:09:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.206 11:09:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.206 ************************************ 00:03:18.206 START TEST denied 00:03:18.206 ************************************ 00:03:18.206 11:09:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:18.206 11:09:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:18.206 11:09:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:18.206 11:09:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:18.206 11:09:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.206 11:09:04 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.486 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.487 11:09:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.674 00:03:25.674 real 0m6.885s 00:03:25.674 user 0m2.261s 00:03:25.674 sys 0m3.984s 00:03:25.674 11:09:11 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.674 11:09:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:25.674 ************************************ 00:03:25.674 END TEST denied 00:03:25.674 ************************************ 00:03:25.674 11:09:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:25.674 11:09:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:25.674 11:09:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.674 11:09:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.674 11:09:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.674 
************************************ 00:03:25.674 START TEST allowed 00:03:25.674 ************************************ 00:03:25.674 11:09:11 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:25.674 11:09:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:25.674 11:09:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:25.674 11:09:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:25.674 11:09:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.674 11:09:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.955 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.955 11:09:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:28.955 11:09:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:28.955 11:09:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:28.955 11:09:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.955 11:09:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.484 00:03:31.484 real 0m6.468s 00:03:31.484 user 0m1.873s 00:03:31.484 sys 0m3.536s 00:03:31.484 11:09:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.484 11:09:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:31.484 ************************************ 00:03:31.484 END TEST allowed 00:03:31.484 ************************************ 00:03:31.484 11:09:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:31.484 00:03:31.484 real 0m19.398s 00:03:31.484 user 0m6.431s 00:03:31.484 sys 0m11.471s 00:03:31.484 11:09:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.484 11:09:17 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:03:31.484 ************************************ 00:03:31.484 END TEST acl 00:03:31.484 ************************************ 00:03:31.744 11:09:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:31.744 11:09:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.744 11:09:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.744 11:09:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.744 11:09:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.744 ************************************ 00:03:31.744 START TEST hugepages 00:03:31.744 ************************************ 00:03:31.744 11:09:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.744 * Looking for test storage... 00:03:31.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:31.744 11:09:17 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.744 11:09:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.745 11:09:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 174428600 kB' 'MemAvailable: 177257944 kB' 'Buffers: 3896 kB' 'Cached: 9155332 kB' 'SwapCached: 0 kB' 'Active: 6204580 kB' 'Inactive: 3483592 kB' 'Active(anon): 5817768 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532328 kB' 'Mapped: 151528 kB' 'Shmem: 5288824 kB' 'KReclaimable: 206308 kB' 'Slab: 704476 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 498168 kB' 'KernelStack: 20400 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 7323008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:31.745 11:09:17 setup.sh.hugepages -- 
setup/common.sh@31-32 -- # IFS=': ' read -r var val _ loop over /proc/meminfo: every key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) fails the [[ $var == Hugepagesize ]] test and hits continue
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
11:09:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # for node in "${!nodes_sys[@]}", for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*: echo 0 (4 iterations, 2 nodes x 2 page sizes)
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:31.747 11:09:18 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:31.747 11:09:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:31.747 11:09:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:31.747 11:09:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
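The Hugepagesize lookup traced above is a plain `IFS=': ' read` scan over `/proc/meminfo`. A minimal sketch of that pattern, assuming a helper shaped like SPDK's `get_meminfo` in `setup/common.sh` (the function name and the extra file argument here are illustrative, the file argument only so the sketch can be exercised without a live `/proc`):

```shell
# Sketch of the setup/common.sh scan seen in the trace: split each
# "Key: value kB" line on ': ', skip keys until the requested one,
# then print its numeric value.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Demo against a captured snapshot rather than the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176614124 kB' \
    'Hugepagesize: 2048 kB' > "$sample"
get_meminfo_field Hugepagesize "$sample"   # prints 2048
rm -f "$sample"
```

On the machine in this log the same scan yields 2048, which is why the trace ends with `echo 2048` and `default_hugepages=2048`.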
00:03:31.747 ************************************
00:03:31.747 START TEST default_setup
00:03:31.747 ************************************
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.747 11:09:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:34.283 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:34.283 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:34.540 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:34.540 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:35.476 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:35.476 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:35.476 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:35.476 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.476 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
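The `get_test_nr_hugepages 2097152 0` call traced above turns a size into a count of 2048 kB pages and assigns them to node 0. A sketch of that arithmetic, consistent with the traced values (2097152 / 2048 = 1024, matching `nr_hugepages=1024`; the trace is consistent with the size argument being in kB, and the variable names below are illustrative):

```shell
# Sketch of the get_test_nr_hugepages step: divide the requested amount
# by the default huge page size, then assign the page count to the
# requested NUMA node(s), as the trace does via nodes_test.
default_hugepages=2048          # Hugepagesize in kB, from /proc/meminfo
size=2097152                    # requested amount in kB (2 GiB)
node_ids=(0)                    # target NUMA node(s), per the trace

nr_hugepages=$(( size / default_hugepages ))
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages   # all pages land on node 0 here
done
echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"
```

With the values from this log the division yields 1024 pages, which is exactly what `verify_nr_hugepages` later checks against `HugePages_Total: 1024` in the meminfo snapshot.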
11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:35.476 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176614124 kB' 'MemAvailable: 179443472 kB' 'Buffers: 3896 kB' 'Cached: 9155432 kB' 'SwapCached: 0 kB' 'Active: 6218440 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831628 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545644 kB' 'Mapped: 151584 kB' 'Shmem: 5288924 kB' 'KReclaimable: 206316 kB' 'Slab: 703460 kB' 'SReclaimable: 206316 kB' 'SUnreclaim: 497144 kB' 'KernelStack: 20320 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7336212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314584 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:35.477 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': ' read -r var val _ loop over the snapshot: each key (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., SReclaimable, SUnreclaim) fails [[ $var == AnonHugePages ]] and hits continue
setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176615132 kB' 'MemAvailable: 179444476 kB' 'Buffers: 3896 kB' 'Cached: 9155436 kB' 'SwapCached: 0 kB' 'Active: 6217700 kB' 'Inactive: 3483592 kB' 'Active(anon): 5830888 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545352 kB' 'Mapped: 151484 kB' 'Shmem: 5288928 kB' 'KReclaimable: 206308 kB' 'Slab: 703360 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 497052 kB' 'KernelStack: 20304 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 
7336232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314552 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.478 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.479 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 
11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.480 
11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.480 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.481 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.481 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176615580 kB' 'MemAvailable: 179444924 kB' 'Buffers: 3896 kB' 'Cached: 9155452 kB' 'SwapCached: 0 kB' 'Active: 6217736 kB' 'Inactive: 3483592 kB' 'Active(anon): 5830924 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545352 kB' 'Mapped: 151484 kB' 'Shmem: 5288944 kB' 'KReclaimable: 206308 kB' 'Slab: 703360 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 497052 kB' 'KernelStack: 20304 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7336252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314568 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:35.481 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.481 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.481
[elided: identical "[[ <key> == HugePages_Rsvd ]] / continue" key-scan entries for MemFree through HugePages_Free]
00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.483 nr_hugepages=1024 00:03:35.483 11:09:21 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.483 resv_hugepages=0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.483 surplus_hugepages=0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.483 anon_hugepages=0 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176615636 kB' 'MemAvailable: 179444980 kB' 'Buffers: 3896 kB' 'Cached: 9155472 kB' 
'SwapCached: 0 kB' 'Active: 6218240 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831428 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545924 kB' 'Mapped: 151484 kB' 'Shmem: 5288964 kB' 'KReclaimable: 206308 kB' 'Slab: 703360 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 497052 kB' 'KernelStack: 20336 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7338888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314600 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.483 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.483
[elided: identical "[[ <key> == HugePages_Total ]] / continue" key-scan entries for MemFree through SReclaimable]
00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.484 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.485 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92831756 kB' 'MemUsed: 4783872 kB' 'SwapCached: 0 kB' 'Active: 1095896 kB' 'Inactive: 223464 kB' 'Active(anon): 916704 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119864 kB' 'Mapped: 51280 kB' 'AnonPages: 202844 kB' 'Shmem: 717208 kB' 'KernelStack: 10824 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76284 kB' 'Slab: 291104 kB' 'SReclaimable: 76284 kB' 'SUnreclaim: 214820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace loop condensed: setup/common.sh@31-32 repeats the same `IFS=': '` read/continue scan over the node0 meminfo fields listed above — MemTotal through HugePages_Free — until HugePages_Surp matches; timestamps 00:03:35.485-00:03:35.487, 11:09:21]
00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var
val _ 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.487 node0=1024 expecting 1024 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.487 00:03:35.487 real 0m3.707s 00:03:35.487 user 0m1.107s 00:03:35.487 sys 0m1.770s 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.487 11:09:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:35.487 ************************************ 00:03:35.487 END TEST default_setup 00:03:35.487 ************************************ 00:03:35.487 11:09:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.487 11:09:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:35.487 11:09:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.487 11:09:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.487 11:09:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.744 
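[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning `Field: value` pairs line by line until the requested key matches, then echoing the value. A minimal standalone sketch of that parsing pattern (the function name and sample values here are illustrative, not taken from the SPDK source; the real helper also handles the per-node `/sys/devices/system/node/nodeN/meminfo` fallback, which this sketch omits):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the xtrace:
# split each line on ': ', skip until the field name matches, echo the value.
get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one "continue" in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Sample input standing in for /proc/meminfo (values illustrative only).
sample=$(mktemp)
printf '%s\n' 'MemTotal: 97615628 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' > "$sample"

get_meminfo_sketch HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The many `continue` lines in the log are exactly this loop body firing once per non-matching field, which is why a single lookup produces dozens of trace lines.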
************************************ 00:03:35.744 START TEST per_node_1G_alloc 00:03:35.744 ************************************ 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.744 
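The `get_test_nr_hugepages 1048576 0 1` call traced above derives `nr_hugepages=512` from the 1 GiB (1048576 kB) request divided by the 2048 kB default hugepage size, then assigns that count to each node listed in `HUGENODE=0,1`. A hedged sketch of the same arithmetic (variable names are illustrative; SPDK's `setup/hugepages.sh` does this inside `get_test_nr_hugepages_per_node`):

```shell
# 1 GiB total, 2 MiB default hugepages: 1048576 / 2048 = 512 pages,
# replicated for each requested NUMA node.
size_kb=1048576
default_hugepage_kb=2048
nr_hugepages=$((size_kb / default_hugepage_kb))

# Mirror nodes_test[_no_nodes]=512 from the trace for nodes 0 and 1:
for node in 0 1; do
    nodes_test[$node]=$nr_hugepages
done
```

With `NRHUGE=512` and `HUGENODE=0,1` exported, `scripts/setup.sh` then reserves 512 pages on each node, which is why the later verification expects 1024 total.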
11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:35.744 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.745 11:09:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.282 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:38.282 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.282 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.282 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176669912 kB' 'MemAvailable: 179499256 kB' 'Buffers: 3896 kB' 'Cached: 9155572 kB' 'SwapCached: 0 kB' 'Active: 6219372 kB' 'Inactive: 3483592 kB' 'Active(anon): 5832560 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546336 kB' 'Mapped: 151548 kB' 'Shmem: 5289064 kB' 'KReclaimable: 206308 kB' 'Slab: 702780 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 496472 kB' 'KernelStack: 20336 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7336876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.282 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 
11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.284 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176670860 kB' 'MemAvailable: 179500204 kB' 'Buffers: 3896 kB' 'Cached: 9155576 kB' 'SwapCached: 0 kB' 'Active: 6218612 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831800 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546056 kB' 'Mapped: 151460 kB' 'Shmem: 5289068 kB' 'KReclaimable: 206308 kB' 'Slab: 702796 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 496488 kB' 'KernelStack: 20336 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7336896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical IFS=': ' / read / [[ $var == HugePages_Surp ]] / continue trace repeated for every /proc/meminfo field (SwapCached, Active, Inactive, Active(anon) … DirectMap1G) until the requested key matches, 00:03:38.284-285 ...]
00:03:38.285 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.285 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.285 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.285 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286
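The trace above is the per-field loop inside `get_meminfo` from `setup/common.sh`: each `/proc/meminfo` line is split on `': '` into a key and a value, non-matching keys hit `continue`, and the value is echoed on the first match. A minimal standalone sketch of that pattern (the function body and the sample meminfo text here are illustrative, reconstructed from the trace rather than copied from the script):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: iterate key/value
# pairs, skip non-matching keys with `continue`, echo the value on match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching fields are skipped -- this is the repeated
        # "[[ Key == HugePages_Surp ]] ... continue" seen in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"     # the trailing "kB" unit, if any, lands in $_
        return 0
    done
    return 1
}

# Illustrative sample; the real run reads /proc/meminfo (or a per-node
# /sys/devices/system/node/nodeN/meminfo copy when a node is given).
sample='MemTotal: 191381156 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo HugePages_Surp <<<"$sample"   # prints 0, as in the trace
```

Splitting with `IFS=': '` treats both the colon and the space as delimiters, so the unit suffix never pollutes the numeric value.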
11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176670104 kB' 'MemAvailable: 179499448 kB' 'Buffers: 3896 kB' 'Cached: 9155592 kB' 'SwapCached: 0 kB' 'Active: 6218628 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831816 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546052 kB' 'Mapped: 151460 kB' 'Shmem: 5289084 kB' 'KReclaimable: 206308 kB' 'Slab: 702796 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 496488 kB' 'KernelStack: 20336 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7336916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:38.286
00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical IFS=': ' / read / [[ $var == HugePages_Rsvd ]] / continue trace repeated for every field (MemFree, MemAvailable, Buffers … ShmemPmdMapped), 00:03:38.286-287; this log chunk ends mid-scan at FileHugePages ...]
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.287 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.288 nr_hugepages=1024 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.288 resv_hugepages=0 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.288 surplus_hugepages=0 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.288 anon_hugepages=0 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # 
(( 1024 == nr_hugepages + surp + resv )) 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.288 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176669348 kB' 'MemAvailable: 179498692 kB' 'Buffers: 3896 kB' 'Cached: 9155592 kB' 'SwapCached: 0 kB' 'Active: 6219316 kB' 'Inactive: 3483592 kB' 'Active(anon): 5832504 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546672 kB' 'Mapped: 151460 kB' 'Shmem: 5289084 kB' 'KReclaimable: 206308 kB' 'Slab: 702796 kB' 'SReclaimable: 206308 kB' 'SUnreclaim: 496488 kB' 'KernelStack: 20288 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7339556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314808 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:38.288 11:09:24
[trace condensed: setup/common.sh@31-32 scan every field of the dump above (MemTotal ... Unaccepted) with IFS=': '; read -r var val _, issuing 'continue' for each field until HugePages_Total matches]
00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024
== nr_hugepages + surp + resv )) 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.289 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.290 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93909584 kB' 'MemUsed: 3706044 kB' 'SwapCached: 0 kB' 'Active: 1095872 kB' 'Inactive: 223464 kB' 'Active(anon): 916680 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119880 kB' 'Mapped: 51280 kB' 'AnonPages: 202644 kB' 'Shmem: 717224 kB' 'KernelStack: 10776 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76284 kB' 'Slab: 290744 kB' 'SReclaimable: 76284 kB' 'SUnreclaim: 214460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.290 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:38.291 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82760148 kB' 'MemUsed: 11005380 kB' 'SwapCached: 0 kB' 'Active: 5123304 kB' 'Inactive: 3260128 kB' 'Active(anon): 4915684 kB' 'Inactive(anon): 0 kB' 'Active(file): 207620 kB' 'Inactive(file): 3260128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8039676 kB' 'Mapped: 100180 kB' 'AnonPages: 343848 kB' 'Shmem: 4571928 kB' 'KernelStack: 9816 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130024 kB' 'Slab: 412052 kB' 'SReclaimable: 130024 kB' 'SUnreclaim: 282028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.291 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}"
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:38.292 node0=512 expecting 512
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:38.292 node1=512 expecting 512
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:38.292
00:03:38.292 real 0m2.698s
00:03:38.292 user 0m1.060s
00:03:38.292 sys 0m1.671s
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:38.292 11:09:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:38.292 ************************************
00:03:38.292 END TEST per_node_1G_alloc
00:03:38.292 ************************************
00:03:38.292 11:09:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:38.292 11:09:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:38.292 11:09:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:38.292 11:09:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:38.292 11:09:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:38.292
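The END TEST / `run_test even_2G_alloc` lines above come from a run_test-style harness. The sketch below is hypothetical (it is not SPDK's actual `autotest_common.sh` implementation, which also handles xtrace and timing): it simply brackets a test function with banners and propagates the function's exit status.

```shell
# Hypothetical run_test-style wrapper (assumption: simplified stand-in,
# not SPDK's real autotest_common.sh): print START/END banners around a
# test function and return the function's exit status.
run_test() {
  local name=$1
  shift
  echo "************ START TEST $name ************"
  "$@"
  local rc=$?   # captured before any further commands overwrite $?
  echo "************ END TEST $name ************"
  return $rc
}

# Stand-in test body for demonstration only.
even_2G_alloc_demo() {
  echo "node0=512 expecting 512"
}

run_test demo even_2G_alloc_demo
```

Because the test body runs via `"$@"`, arguments with spaces survive intact, and the wrapper's return status lets the caller count failures.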
************************************
00:03:38.292 START TEST even_2G_alloc
00:03:38.292 ************************************
00:03:38.292 11:09:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:38.292 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:38.292 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83
-- # : 512
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.293 11:09:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:41.583 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:41.583 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:41.583 0000:80:04.5 (8086
2021): Already using the vfio-pci driver 00:03:41.583 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.583 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.583 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.583 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.583 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.583 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176677560 kB' 'MemAvailable: 179506896 kB' 'Buffers: 3896 kB' 'Cached: 9155720 kB' 'SwapCached: 0 kB' 'Active: 6218128 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831316 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545244 kB' 'Mapped: 150604 kB' 'Shmem: 5289212 kB' 'KReclaimable: 206292 kB' 'Slab: 702552 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 496260 kB' 'KernelStack: 20384 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 
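The long `IFS=': '` / `read -r var val _` loop traced above is how `get_meminfo` scans meminfo output for one field. The following is a simplified, hypothetical stand-in that reads from stdin so the example is self-contained; the real `setup/common.sh` `get_meminfo` reads `/proc/meminfo` (or a per-node `/sys/devices/system/node/node<N>/meminfo`, stripping the leading `Node N` prefix) and does extra bookkeeping.

```shell
# Simplified sketch of the traced parse loop (assumption: stdin stand-in,
# not the real setup/common.sh get_meminfo): split "Key: value [kB]" lines
# on ':' and whitespace, print the value of the requested key.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1   # key not found in the input
}

printf '%s\n' 'MemTotal: 191381156 kB' 'HugePages_Surp: 0' |
  get_meminfo HugePages_Surp   # prints: 0
```

Because `': '` mixes a non-whitespace delimiter with whitespace, `read` treats the `": "` after each key as a single separator, so `var` gets the bare field name and `val` the number, with the `kB` unit falling into the throwaway `_`.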
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.583 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 
11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.584 11:09:27 
00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176678144 kB' 'MemAvailable: 179507480 kB' 'Buffers: 3896 kB' 'Cached: 9155724 kB' 'SwapCached: 0 kB' 'Active: 6218596 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831784 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546196 kB' 'Mapped: 150604 kB' 'Shmem: 5289216 kB' 'KReclaimable: 206292 kB' 'Slab: 702472 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 496180 kB' 'KernelStack: 20592 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:41.584 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' / read -r var val _ loop stepped through every /proc/meminfo field from MemTotal to HugePages_Free, taking `continue` on each non-matching [[ $var == HugePages_Surp ]] test]
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
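The trace above is `get_meminfo` from SPDK's test/setup/common.sh scanning `/proc/meminfo` one `field: value` pair at a time. A minimal sketch of that loop is below; the exact body is an assumption reconstructed from the xtrace (the real helper also handles per-NUMA-node meminfo files), and the optional second argument is an addition here purely so the sketch can be exercised against a fixture file.

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the xtrace: read "field: value" pairs,
# skip non-matching fields with `continue`, and echo the value on a match.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # e.g. MemTotal, MemFree, ... until the match
        echo "${val:-0}"                  # value only; the trailing "kB" lands in _
        return 0
    done < "$mem_f"
    return 1
}
```

Usage mirrors setup/hugepages.sh@99 in the trace: `surp=$(get_meminfo HugePages_Surp)`, which yields `0` for the snapshot logged here.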
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176678948 kB' 'MemAvailable: 179508284 kB' 'Buffers: 3896 kB' 'Cached: 9155740 kB' 'SwapCached: 0 kB' 'Active: 6218500 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831688 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545560 kB' 'Mapped: 151040 kB' 'Shmem: 5289232 kB' 'KReclaimable: 206292 kB' 'Slab: 702452 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 496160 kB' 'KernelStack: 20464 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:41.586 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same read loop stepped through the /proc/meminfo fields from MemTotal through HardwareCorrupted, taking `continue` on each non-matching [[ $var == HugePages_Rsvd ]] test]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.588 nr_hugepages=1024 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.588 resv_hugepages=0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.588 surplus_hugepages=0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.588 anon_hugepages=0 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.588 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176673316 kB' 'MemAvailable: 179502652 kB' 'Buffers: 3896 kB' 'Cached: 9155764 kB' 'SwapCached: 0 kB' 'Active: 6222568 kB' 'Inactive: 3483592 kB' 'Active(anon): 5835756 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550148 kB' 'Mapped: 151040 kB' 'Shmem: 5289256 kB' 'KReclaimable: 206292 kB' 'Slab: 702452 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 496160 kB' 'KernelStack: 20192 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7333304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314668 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:41.588 
11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.588 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.589 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 
11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.590 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93895740 kB' 'MemUsed: 3719888 kB' 'SwapCached: 0 kB' 'Active: 1102332 kB' 'Inactive: 223464 kB' 'Active(anon): 923140 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119912 kB' 'Mapped: 50992 kB' 'AnonPages: 209036 kB' 'Shmem: 717256 kB' 'KernelStack: 10840 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76284 kB' 'Slab: 290512 kB' 'SReclaimable: 76284 kB' 'SUnreclaim: 214228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.590 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82777040 kB' 'MemUsed: 10988488 kB' 'SwapCached: 0 kB' 'Active: 5120980 kB' 'Inactive: 3260128 kB' 'Active(anon): 4913360 kB' 'Inactive(anon): 0 kB' 'Active(file): 207620 kB' 'Inactive(file): 3260128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8039776 kB' 'Mapped: 99816 kB' 'AnonPages: 341432 kB' 'Shmem: 4572028 kB' 'KernelStack: 9496 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130008 kB' 'Slab: 411972 kB' 'SReclaimable: 130008 kB' 'SUnreclaim: 281964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.591 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 
11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.592 11:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: HugePages_Free also fails the HugePages_Surp match]
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:41.592 node0=512 expecting 512
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.592 11:09:27
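The xtrace above repeats one pattern hundreds of times: setup/common.sh splits each /proc/meminfo line on `': '` with `read -r var val _` and continues until the requested field (here HugePages_Surp) matches, then echoes its value. A minimal standalone sketch of that pattern; the function name and the optional file argument are illustrative, not SPDK's exact interface:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each line on ': ',
# print the value column of the one requested field.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Surp
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

Count fields such as `HugePages_Surp` carry no `kB` unit, so `val` is the bare number; for sized fields the unit lands in `_` and is discarded, which matches the `echo 0` for a zero surplus seen in the trace.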
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:41.592 node1=512 expecting 512
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:41.592
00:03:41.592 real 0m2.987s
00:03:41.592 user 0m1.200s
00:03:41.592 sys 0m1.855s
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.592 11:09:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:41.592 ************************************
00:03:41.592 END TEST even_2G_alloc
00:03:41.592 ************************************
00:03:41.592 11:09:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:41.592 11:09:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:41.592 11:09:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:41.592 11:09:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.592 11:09:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:41.592 ************************************
00:03:41.592 START TEST odd_alloc
00:03:41.592 ************************************
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- #
get_test_nr_hugepages_per_node
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:41.592 11:09:27
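In the trace above, get_test_nr_hugepages_per_node distributes nr_hugepages=1025 across _no_nodes=2, with node1 assigned 512 pages and node0 assigned 513. A hedged sketch of a split rule consistent with that outcome (even share per node, odd remainder parked on node 0); the function name is illustrative and the real hugepages.sh logic may differ in detail:

```shell
#!/usr/bin/env bash
# Sketch: split a hugepage count over NUMA nodes. The leftover page from an
# odd total goes to node 0, matching node0=513 / node1=512 in the trace.
split_hugepages_per_node() {
    local total=$1 no_nodes=$2 n
    local -a nodes_test=()
    local share=$(( total / no_nodes ))       # even share, e.g. 1025/2 = 512
    for (( n = 0; n < no_nodes; n++ )); do
        nodes_test[n]=$share
    done
    # Remainder (total % no_nodes) lands on node 0 -- assumed rule.
    nodes_test[0]=$(( nodes_test[0] + total % no_nodes ))
    echo "${nodes_test[@]}"
}
```

With an even total (as in the even_2G_alloc test that just finished) the remainder is zero and every node receives the same count.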
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.592 11:09:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:44.127 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.127 [xtrace condensed: 0000:5e:00.0 (8086 0a54) and the remaining 0000:00:04.x / 0000:80:04.x (8086 2021) devices, 16 more lines, likewise report "Already using the vfio-pci driver"]
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@93 -- # local resv
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176707508 kB' 'MemAvailable: 179536844 kB' 'Buffers: 3896 kB' 'Cached: 9155876 kB' 'SwapCached: 0 kB' 'Active: 6218460 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831648 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545300 kB' 'Mapped: 150816 kB' 'Shmem: 5289368 kB' 'KReclaimable: 206292 kB' 'Slab: 701764 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 495472 kB' 'KernelStack: 20256 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7328020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal, MemFree and MemAvailable fail the AnonHugePages match; the loop continues]
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.127 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed and VmallocChunk all fail the AnonHugePages match; each iteration continues]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.128
11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176708224 kB' 'MemAvailable: 179537560 kB' 'Buffers: 3896 kB' 'Cached: 9155880 kB' 'SwapCached: 0 kB' 'Active: 6218196 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831384 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545536 kB' 'Mapped: 150704 kB' 'Shmem: 5289372 kB' 'KReclaimable: 206292 kB' 'Slab: 701708 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 495416 kB' 'KernelStack: 20256 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7328040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314712 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc --
setup/common.sh@31 -- # read -r var val _
00:03:44.128 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback and AnonPages all fail the HugePages_Surp match; each iteration continues]
00:03:44.129 11:09:30
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.129 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176708224 kB' 'MemAvailable: 179537560 kB' 'Buffers: 3896 kB' 'Cached: 9155880 kB' 'SwapCached: 0 kB' 'Active: 6218228 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831416 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545572 kB' 'Mapped: 150704 kB' 'Shmem: 5289372 kB' 'KReclaimable: 206292 kB' 'Slab: 701708 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 495416 kB' 'KernelStack: 20272 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7328060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314712 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.130 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 
11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.392 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:44.393 nr_hugepages=1025 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.393 resv_hugepages=0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.393 surplus_hugepages=0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.393 anon_hugepages=0 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176708256 kB' 'MemAvailable: 179537592 kB' 'Buffers: 3896 kB' 'Cached: 9155916 kB' 'SwapCached: 0 kB' 'Active: 6218224 kB' 'Inactive: 3483592 kB' 'Active(anon): 5831412 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545532 kB' 'Mapped: 150704 kB' 'Shmem: 5289408 kB' 'KReclaimable: 206292 kB' 'Slab: 701708 kB' 'SReclaimable: 206292 kB' 'SUnreclaim: 495416 kB' 'KernelStack: 20256 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 7328080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314712 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.393 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.394 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.394 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93903704 kB' 'MemUsed: 3711924 kB' 'SwapCached: 0 kB' 'Active: 1096852 kB' 'Inactive: 223464 kB' 'Active(anon): 917660 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119932 kB' 'Mapped: 50988 kB' 'AnonPages: 203652 kB' 'Shmem: 717276 kB' 'KernelStack: 10744 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76284 kB' 'Slab: 290136 kB' 'SReclaimable: 76284 kB' 'SUnreclaim: 213852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.394 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82804812 kB' 'MemUsed: 10960716 kB' 'SwapCached: 0 kB' 'Active: 5121288 kB' 'Inactive: 3260128 kB' 
'Active(anon): 4913668 kB' 'Inactive(anon): 0 kB' 'Active(file): 207620 kB' 'Inactive(file): 3260128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8039924 kB' 'Mapped: 99716 kB' 'AnonPages: 341728 kB' 'Shmem: 4572176 kB' 'KernelStack: 9496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130008 kB' 'Slab: 411572 kB' 'SReclaimable: 130008 kB' 'SUnreclaim: 281564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.395 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 
11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:44.396 node0=512 expecting 513 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:44.396 node1=513 expecting 512 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:44.396 00:03:44.396 real 0m2.922s 00:03:44.396 user 0m1.193s 00:03:44.396 sys 0m1.797s 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.396 11:09:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.396 ************************************ 00:03:44.396 END TEST odd_alloc 00:03:44.396 ************************************ 00:03:44.396 11:09:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.396 11:09:30 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:44.396 11:09:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.396 11:09:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.396 11:09:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.396 ************************************ 00:03:44.396 START TEST custom_alloc 00:03:44.396 ************************************ 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:44.396 11:09:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.396 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:44.397 11:09:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.397 11:09:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.923 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.923 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
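The xtrace above walks get_test_nr_hugepages through two sizes (1048576 kB -> 512 pages and 2097152 kB -> 1024 pages at the 2048 kB default page size), splits the count per NUMA node, and folds nodes_hp into the HUGENODE list handed to setup.sh. A minimal standalone sketch of that bookkeeping (plain bash, assuming the 2048 kB default from the log; not the SPDK scripts themselves):

```shell
#!/usr/bin/env bash
# Sketch of the hugepages.sh bookkeeping traced above: size -> page count,
# even split across nodes, and assembly of the HUGENODE=... string.
# Assumes the 2048 kB default hugepage size reported in the log.

default_hugepages=2048   # kB, from 'Hugepagesize: 2048 kB' in the log

# size in kB -> number of default-sized hugepages
get_test_nr_hugepages() {
  local size=$1
  (( size >= default_hugepages )) || return 1
  nr_hugepages=$(( size / default_hugepages ))
}

# split nr_hugepages evenly across the given number of NUMA nodes
get_test_nr_hugepages_per_node() {
  local nodes=$1 n
  nodes_test=()
  for (( n = 0; n < nodes; n++ )); do
    nodes_test[n]=$(( nr_hugepages / nodes ))
  done
}

get_test_nr_hugepages 1048576 && get_test_nr_hugepages_per_node 2
# nodes_test is now (256 256), matching the @82 'nodes_test[...]=256' trace

nodes_hp=(512 1024)   # per-node targets, as set at @175 and @178 above

# fold nodes_hp into the comma-separated HUGENODE list setup.sh consumes
HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
  HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  (( _nr_hugepages += nodes_hp[node] ))
done
hugenode=$(IFS=,; echo "${HUGENODE[*]}")
echo "HUGENODE=$hugenode nr_hugepages=$_nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536
```

This reproduces both values the log checks later: the HUGENODE string at @187 and the combined total of 1536 pages verified at @188.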
00:03:46.923 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.923 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175652080 kB' 'MemAvailable: 178481408 kB' 'Buffers: 3896 kB' 'Cached: 9156032 kB' 'SwapCached: 0 kB' 'Active: 6220296 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833484 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546828 kB' 'Mapped: 150692 kB' 'Shmem: 5289524 kB' 'KReclaimable: 206276 kB' 'Slab: 701648 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 495372 kB' 'KernelStack: 20288 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7328568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:47.188 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: each field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits 'continue'; the identical per-field iterations are elided] 00:03:47.189 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.189 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175652708 kB' 'MemAvailable: 178482036 kB' 'Buffers: 3896 kB' 'Cached: 9156036 kB' 'SwapCached: 0 kB' 'Active: 6219308 kB' 'Inactive: 3483592 kB' 'Active(anon): 5832496 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546304 kB' 'Mapped: 150596 kB' 'Shmem: 5289528 kB' 'KReclaimable: 206276 kB' 'Slab: 701600 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 495324 kB' 'KernelStack: 20256 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7328588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same field-by-field scan now repeats for HugePages_Surp; the identical 'continue' iterations are elided, and the captured log is truncated mid-scan] 00:03:47.190 11:09:33
-- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.190 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175652960 kB' 'MemAvailable: 178482288 kB' 'Buffers: 3896 kB' 'Cached: 9156048 kB' 'SwapCached: 0 kB' 'Active: 6219184 kB' 'Inactive: 3483592 kB' 'Active(anon): 5832372 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546156 kB' 'Mapped: 150596 kB' 'Shmem: 5289540 kB' 'KReclaimable: 206276 kB' 'Slab: 701600 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 495324 kB' 'KernelStack: 20240 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7328608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:47.191 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 
11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.192 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:47.193 nr_hugepages=1536 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.193 resv_hugepages=0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.193 surplus_hugepages=0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.193 anon_hugepages=0 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.193 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175652556 kB' 'MemAvailable: 178481884 kB' 'Buffers: 3896 kB' 'Cached: 9156072 kB' 'SwapCached: 0 kB' 'Active: 6219348 kB' 'Inactive: 3483592 kB' 'Active(anon): 5832536 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546308 kB' 'Mapped: 150596 kB' 'Shmem: 5289564 kB' 'KReclaimable: 206276 kB' 'Slab: 701600 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 495324 kB' 'KernelStack: 20256 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 7328628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 
11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.194 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.195 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.195 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93899012 kB' 'MemUsed: 3716616 kB' 'SwapCached: 0 kB' 'Active: 1097876 kB' 'Inactive: 223464 kB' 'Active(anon): 918684 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119968 kB' 'Mapped: 50868 kB' 'AnonPages: 204564 kB' 'Shmem: 717312 kB' 'KernelStack: 10744 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76276 kB' 'Slab: 289944 kB' 'SReclaimable: 76276 kB' 'SUnreclaim: 213668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.195 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.196 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.196 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 81754448 kB' 'MemUsed: 12011080 kB' 'SwapCached: 0 kB' 'Active: 5121512 kB' 'Inactive: 3260128 kB' 'Active(anon): 4913892 kB' 'Inactive(anon): 0 kB' 'Active(file): 207620 kB' 'Inactive(file): 3260128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8040044 kB' 'Mapped: 99728 kB' 'AnonPages: 341748 kB' 'Shmem: 4572296 kB' 'KernelStack: 9512 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130000 kB' 'Slab: 411656 kB' 'SReclaimable: 130000 kB' 'SUnreclaim: 281656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.455 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.456 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.457 
11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.457 11:09:33 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.457 node0=512 expecting 512 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:47.457 node1=1024 expecting 1024 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:47.457 00:03:47.457 real 0m2.912s 00:03:47.457 user 0m1.187s 00:03:47.457 sys 0m1.782s 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.457 11:09:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.457 ************************************ 00:03:47.457 END TEST custom_alloc 00:03:47.457 ************************************ 00:03:47.457 11:09:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.457 11:09:33 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.457 11:09:33 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.457 11:09:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.457 11:09:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.457 ************************************ 00:03:47.457 START TEST no_shrink_alloc 00:03:47.457 ************************************ 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.457 11:09:33 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.457 11:09:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.989 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:49.989 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.1 
(8086 2021): Already using the vfio-pci driver 00:03:49.989 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176661424 kB' 'MemAvailable: 179490752 kB' 'Buffers: 3896 kB' 'Cached: 9156176 kB' 'SwapCached: 0 kB' 'Active: 6220092 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833280 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546344 kB' 'Mapped: 150668 kB' 'Shmem: 5289668 kB' 'KReclaimable: 206276 kB' 'Slab: 702668 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 496392 kB' 'KernelStack: 20272 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7328812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314728 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.989 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.990 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176661820 kB' 'MemAvailable: 179491148 kB' 'Buffers: 3896 kB' 'Cached: 9156176 kB' 'SwapCached: 0 kB' 'Active: 6219920 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833108 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546652 kB' 'Mapped: 150592 kB' 'Shmem: 5289668 kB' 'KReclaimable: 206276 kB' 'Slab: 702624 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 20256 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7328828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.991 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.992 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176660560 kB' 'MemAvailable: 179489888 kB' 'Buffers: 3896 kB' 'Cached: 9156196 kB' 'SwapCached: 0 kB' 'Active: 6219892 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833080 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546652 kB' 'Mapped: 150592 kB' 'Shmem: 5289688 kB' 'KReclaimable: 206276 kB' 'Slab: 702624 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 20256 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7328852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 
kB' 'DirectMap1G: 184549376 kB' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.993 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.994 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.995 nr_hugepages=1024 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.995 resv_hugepages=0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:03:49.995 surplus_hugepages=0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.995 anon_hugepages=0 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176659588 kB' 'MemAvailable: 179488916 kB' 'Buffers: 3896 kB' 'Cached: 9156216 kB' 'SwapCached: 0 kB' 'Active: 6220848 kB' 'Inactive: 3483592 kB' 'Active(anon): 5834036 kB' 
'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547576 kB' 'Mapped: 151096 kB' 'Shmem: 5289708 kB' 'KReclaimable: 206276 kB' 'Slab: 702624 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 496348 kB' 'KernelStack: 20256 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7330756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.995 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.996 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 
11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92848824 kB' 'MemUsed: 4766804 kB' 'SwapCached: 0 kB' 'Active: 1103760 kB' 'Inactive: 223464 kB' 'Active(anon): 924568 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1119988 kB' 'Mapped: 50848 kB' 'AnonPages: 210936 kB' 'Shmem: 717332 kB' 'KernelStack: 10792 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76276 kB' 'Slab: 290560 kB' 'SReclaimable: 76276 kB' 'SUnreclaim: 214284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.997 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
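The long chains of `[[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` records above are `get_meminfo` (setup/common.sh@17-33) walking a meminfo snapshot key by key until it finds the requested field; the backslashes are just how bash's xtrace quotes a literal right-hand side of `[[ == ]]`. A sketch of that helper, reconstructed from the trace (an approximation of the flow, not the exact SPDK source):

```shell
# Reconstruction of the get_meminfo loop traced above.
shopt -s extglob  # for the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # When a node is given and a per-node meminfo exists, read that instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node 0 "
    # Walk the snapshot; each continue record in the log is this loop
    # skipping one field that is not the requested key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # quoted == matches literally
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
```

Snapshotting the file once with `mapfile` and matching against the copy keeps every field read consistent, which matters when the script compares several counters from the same instant.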
00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.998 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.999 node0=1024 expecting 1024 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.999 11:09:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.527 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.527 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.527 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.527 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.527 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176639700 kB' 'MemAvailable: 179469028 kB' 'Buffers: 3896 kB' 'Cached: 9156312 kB' 'SwapCached: 0 kB' 'Active: 6219936 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833124 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546564 kB' 'Mapped: 150636 kB' 'Shmem: 5289804 kB' 'KReclaimable: 206276 kB' 'Slab: 702420 kB' 'SReclaimable: 206276 kB' 'SUnreclaim: 496144 kB' 'KernelStack: 20192 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.790 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.790 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.791 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.792 
11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176640424 kB' 'MemAvailable: 179469748 kB' 'Buffers: 3896 kB' 'Cached: 9156312 kB' 'SwapCached: 0 kB' 'Active: 6220244 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833432 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546932 kB' 'Mapped: 
150616 kB' 'Shmem: 5289804 kB' 'KReclaimable: 206268 kB' 'Slab: 702432 kB' 'SReclaimable: 206268 kB' 'SUnreclaim: 496164 kB' 'KernelStack: 20256 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.792 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.792 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 
11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 
11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.793 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176640424 kB' 'MemAvailable: 179469748 kB' 'Buffers: 3896 kB' 'Cached: 9156344 kB' 'SwapCached: 0 kB' 'Active: 6220952 kB' 'Inactive: 3483592 kB' 'Active(anon): 5834140 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547648 kB' 'Mapped: 150616 kB' 'Shmem: 5289836 kB' 'KReclaimable: 206268 kB' 'Slab: 702432 kB' 'SReclaimable: 206268 kB' 'SUnreclaim: 496164 kB' 'KernelStack: 20256 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.794 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/IFS/read trace repeats for each remaining /proc/meminfo field until HugePages_Rsvd matches ...]
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.795 nr_hugepages=1024
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.795 resv_hugepages=0
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.795 surplus_hugepages=0
00:03:52.795 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.795 anon_hugepages=0
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.796 11:09:38
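The trace records above are the `get_meminfo` helper in setup/common.sh scanning /proc/meminfo field by field with `IFS=': '` until the requested key matches. A minimal runnable sketch of that loop follows; it is reconstructed from the trace, not the verbatim SPDK script (the real helper also handles per-NUMA-node meminfo files, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop driving the trace: split each
# meminfo line on ': ' and print the value of the requested field.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field emits the compare/continue/IFS/read
        # records that dominate the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example: query total system memory in kB.
get_meminfo MemTotal
```

With `IFS=': '` the colon acts as a field separator alongside whitespace, so `HugePages_Total:     1024` splits cleanly into key and value without further trimming.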
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 176641360 kB' 'MemAvailable: 179470684 kB' 'Buffers: 3896 kB' 'Cached: 9156356 kB' 'SwapCached: 0 kB' 'Active: 6220384 kB' 'Inactive: 3483592 kB' 'Active(anon): 5833572 kB' 'Inactive(anon): 0 kB' 'Active(file): 386812 kB' 'Inactive(file): 3483592 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547040 kB' 'Mapped: 150616 kB' 'Shmem: 5289848 kB' 'KReclaimable: 206268 kB' 'Slab: 702432 kB' 'SReclaimable: 206268 kB' 'SUnreclaim: 496164 kB' 'KernelStack: 20256 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 7329352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 68352 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2159572 kB' 'DirectMap2M: 15345664 kB' 'DirectMap1G: 184549376 kB'
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.796 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/IFS/read trace repeats for each subsequent /proc/meminfo field ...]
00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.797 11:09:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92837316 kB' 'MemUsed: 4778312 kB' 'SwapCached: 0 kB' 'Active: 1098900 kB' 'Inactive: 223464 kB' 'Active(anon): 919708 kB' 'Inactive(anon): 0 kB' 'Active(file): 179192 kB' 'Inactive(file): 223464 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1120000 kB' 'Mapped: 50868 kB' 'AnonPages: 205572 kB' 'Shmem: 717344 kB' 'KernelStack: 10776 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76268 kB' 'Slab: 290528 kB' 'SReclaimable: 76268 kB' 'SUnreclaim: 214260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.797 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 
11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 
11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.798 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.799 node0=1024 expecting 1024 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.799 00:03:52.799 real 0m5.406s 00:03:52.799 user 0m2.094s 00:03:52.799 sys 0m3.314s 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.799 11:09:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.799 ************************************ 00:03:52.799 END TEST no_shrink_alloc 00:03:52.799 ************************************ 00:03:52.799 11:09:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.799 11:09:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.799 00:03:52.799 real 0m21.177s 00:03:52.799 user 0m8.078s 00:03:52.799 sys 0m12.533s 00:03:52.799 11:09:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.799 11:09:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.799 ************************************ 00:03:52.799 END TEST hugepages 00:03:52.799 ************************************ 00:03:52.799 11:09:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.799 11:09:39 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:52.799 11:09:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.799 11:09:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.799 11:09:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.799 ************************************ 00:03:52.799 START TEST driver 00:03:52.799 ************************************ 00:03:52.799 11:09:39 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:53.057 * Looking for test storage... 00:03:53.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.057 11:09:39 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:53.057 11:09:39 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.057 11:09:39 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.242 11:09:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:57.242 11:09:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.242 11:09:42 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.242 11:09:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.242 ************************************ 00:03:57.242 START TEST guess_driver 00:03:57.242 ************************************ 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:57.242 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:57.242 Looking for driver=vfio-pci 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.242 11:09:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.142 11:09:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:59.142 11:09:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:59.142 11:09:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
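The guess_driver trace above picks vfio-pci when the host exposes populated IOMMU groups (it counted 174) and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules. A minimal sketch of that decision, with the sysfs/modprobe probing factored out into parameters so the logic runs anywhere — the parameterized `pick_driver` signature is an assumption for illustration, not the actual setup/driver.sh:

```shell
# Sketch of the traced driver-selection logic: prefer vfio-pci when the
# IOMMU is active and the vfio_pci module resolves, otherwise fall back
# to uio_pci_generic. Inputs are passed in so no /sys access is needed.
pick_driver() {
  local iommu_group_count=$1   # e.g. number of entries in /sys/kernel/iommu_groups
  local vfio_pci_resolves=$2   # 1 if 'modprobe --show-depends vfio_pci' listed .ko files
  if (( iommu_group_count > 0 && vfio_pci_resolves )); then
    echo vfio-pci
  else
    echo uio_pci_generic
  fi
}

pick_driver 174 1   # the trace counted 174 IOMMU groups -> vfio-pci
```

With 0 IOMMU groups (IOMMU disabled in firmware or kernel cmdline) the same check would fall back to uio_pci_generic.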
00:04:00.077 11:09:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.266 00:04:04.266 real 0m7.107s 00:04:04.266 user 0m1.903s 00:04:04.266 sys 0m3.611s 00:04:04.266 11:09:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.266 11:09:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.266 ************************************ 00:04:04.266 END TEST guess_driver 00:04:04.266 ************************************ 00:04:04.266 11:09:49 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:04.266 00:04:04.266 real 0m10.760s 00:04:04.266 user 0m2.824s 00:04:04.266 sys 0m5.545s 00:04:04.266 11:09:49 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.266 11:09:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.266 ************************************ 00:04:04.266 END TEST driver 00:04:04.266 ************************************ 00:04:04.266 11:09:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:04.266 11:09:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:04.266 11:09:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.266 11:09:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.266 11:09:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:04.266 ************************************ 00:04:04.266 START TEST devices 00:04:04.266 ************************************ 00:04:04.266 11:09:49 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:04.266 * Looking for test storage... 
00:04:04.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:04.266 11:09:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:04.266 11:09:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:04.266 11:09:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.266 11:09:50 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
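The `get_zoned_devs`/`is_block_zoned` steps traced above read each NVMe device's `queue/zoned` sysfs attribute and compare it against `none`. A sketch of that predicate with the attribute value passed in directly (the helper name mirrors the trace; taking the value as an argument rather than reading sysfs is an assumption for testability):

```shell
# A block device is zoned when /sys/block/<dev>/queue/zoned reports a value
# other than "none" (zoned devices report e.g. "host-managed" or "host-aware").
is_block_zoned() {
  local zoned_attr=$1   # contents of /sys/block/<dev>/queue/zoned
  [[ $zoned_attr != none ]]
}
```

In the trace nvme0n1 reported `none`, so it was kept as a regular (non-zoned) test disk.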
00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:07.635 11:09:53 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:07.635 No valid GPT data, bailing 00:04:07.635 11:09:53 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:07.635 11:09:53 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:07.635 11:09:53 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:07.635 11:09:53 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.635 11:09:53 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.635 11:09:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.635 ************************************ 00:04:07.635 START TEST nvme_mount 00:04:07.635 ************************************ 00:04:07.635 11:09:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:07.635 11:09:53 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:07.636 11:09:53 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:07.636 11:09:53 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:07.930 Creating new GPT entries in memory. 00:04:07.930 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.930 other utilities. 00:04:07.930 11:09:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.930 11:09:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.930 11:09:54 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.930 11:09:54 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.930 11:09:54 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:08.969 Creating new GPT entries in memory. 00:04:08.969 The operation has completed successfully. 
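The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` boundaries above come from the sector arithmetic traced in setup/common.sh: a 1 GiB partition converted to 512-byte sectors, starting at the conventional first usable LBA 2048. Reproducing the arithmetic:

```shell
# Partition-boundary arithmetic from the trace: 1 GiB expressed in
# 512-byte sectors, with the partition starting at LBA 2048.
size=1073741824            # bytes (1 GiB), from 'local size=1073741824'
(( size /= 512 ))          # -> 2097152 sectors, from '(( size /= 512 ))'
part_start=2048
(( part_end = part_start + size - 1 ))
echo "1:${part_start}:${part_end}"   # the start:end pair handed to sgdisk --new
```

2048 + 2097152 - 1 = 2099199, matching the traced sgdisk invocation exactly.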
00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 708732 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:08.969 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
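Earlier in the device scan, `min_disk_size=3221225472` gated which disks qualify as test storage, and nvme0n1's reported 1000204886016 bytes passed the `(( 1000204886016 >= min_disk_size ))` check. The threshold is simply 3 GiB:

```shell
# Disk-eligibility check from the trace: a candidate test disk must be
# at least 3 GiB; nvme0n1 (a ~1 TB drive) easily qualifies.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes
disk_size=1000204886016                      # bytes reported for nvme0n1
if (( disk_size >= min_disk_size )); then
  echo "nvme0n1 eligible"
fi
```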
00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.228 11:09:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.761 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:11.762 
11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:11.762 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.762 11:09:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.762 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:11.762 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:11.762 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:11.762 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:11.762 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:11.762 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:11.762 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.762 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:11.762 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.021 11:09:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.559 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.559 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:14.559 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:14.559 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:14.560 11:10:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.560 11:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:17.097 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.357 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.357 00:04:17.357 real 0m10.331s 00:04:17.357 user 0m2.974s 00:04:17.357 sys 0m5.097s 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.357 11:10:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:17.357 ************************************ 00:04:17.357 END TEST nvme_mount 00:04:17.357 ************************************ 00:04:17.357 11:10:03 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:17.357 11:10:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:04:17.357 11:10:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.357 11:10:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.357 11:10:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:17.357 ************************************ 00:04:17.357 START TEST dm_mount 00:04:17.357 ************************************ 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.357 11:10:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:18.736 Creating new GPT entries in memory. 00:04:18.736 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.736 other utilities. 00:04:18.736 11:10:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.736 11:10:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.736 11:10:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.736 11:10:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.736 11:10:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:19.673 Creating new GPT entries in memory. 00:04:19.673 The operation has completed successfully. 00:04:19.673 11:10:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.673 11:10:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.673 11:10:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:19.673 11:10:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.673 11:10:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:20.610 The operation has completed successfully. 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 712923 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.610 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:20.611 11:10:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.611 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.611 11:10:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.513 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:22.772 11:10:08 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.772 11:10:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.311 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.312 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.312 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.312 11:10:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.312 00:04:25.312 real 0m7.877s 00:04:25.312 user 0m1.595s 00:04:25.312 sys 0m3.026s 00:04:25.312 11:10:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.312 11:10:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:25.312 ************************************ 00:04:25.312 END TEST dm_mount 00:04:25.312 ************************************ 00:04:25.312 11:10:11 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.312 11:10:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.571 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.571 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:04:25.571 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.571 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.571 11:10:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.571 00:04:25.571 real 0m21.879s 00:04:25.571 user 0m5.880s 00:04:25.571 sys 0m10.357s 00:04:25.571 11:10:11 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.571 11:10:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.571 ************************************ 00:04:25.571 END TEST devices 00:04:25.571 ************************************ 00:04:25.571 11:10:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:25.571 00:04:25.571 real 1m13.577s 00:04:25.571 user 0m23.367s 00:04:25.571 sys 0m40.141s 00:04:25.571 11:10:11 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.571 11:10:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.571 ************************************ 00:04:25.571 END TEST setup.sh 00:04:25.571 ************************************ 00:04:25.571 11:10:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.571 11:10:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:28.106 Hugepages 00:04:28.106 node hugesize free / total 
00:04:28.106 node0 1048576kB 0 / 0 00:04:28.106 node0 2048kB 2048 / 2048 00:04:28.106 node1 1048576kB 0 / 0 00:04:28.106 node1 2048kB 0 / 0 00:04:28.106 00:04:28.106 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.106 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:28.106 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:28.106 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:28.106 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:28.106 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:28.106 11:10:14 -- spdk/autotest.sh@130 -- # uname -s 00:04:28.106 11:10:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:28.106 11:10:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:28.106 11:10:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.643 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.643 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.580 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.580 11:10:17 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:32.519 11:10:18 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:32.519 11:10:18 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:32.519 11:10:18 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.519 11:10:18 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:32.519 11:10:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:32.519 11:10:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:32.519 11:10:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.519 11:10:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.519 11:10:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.519 11:10:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:32.519 11:10:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:32.519 11:10:18 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.053 Waiting for block devices as requested 00:04:35.053 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.311 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.311 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.569 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.569 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:04:35.569 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:35.569 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:35.828 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:35.828 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:35.828 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.828 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.087 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.087 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.087 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.087 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.347 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.347 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.347 11:10:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:36.347 11:10:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:36.347 11:10:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:36.347 11:10:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:36.347 11:10:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:36.347 11:10:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:36.347 11:10:22 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:36.347 11:10:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:36.347 11:10:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:36.347 11:10:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:36.347 11:10:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:36.347 11:10:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:36.347 11:10:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:36.347 11:10:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:36.347 11:10:22 -- common/autotest_common.sh@1557 -- # continue 00:04:36.347 11:10:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:36.347 11:10:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.347 11:10:22 -- common/autotest_common.sh@10 -- # set +x 00:04:36.606 11:10:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:36.606 11:10:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.606 11:10:22 -- common/autotest_common.sh@10 -- # set +x 00:04:36.606 11:10:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.139 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.139 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.074 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:40.074 11:10:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:40.074 11:10:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.074 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:04:40.333 11:10:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:40.333 11:10:26 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:40.333 11:10:26 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.333 11:10:26 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:40.333 11:10:26 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:40.333 11:10:26 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:40.333 11:10:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:40.333 11:10:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:40.333 11:10:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.333 11:10:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:40.333 11:10:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:40.333 11:10:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:40.333 11:10:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:40.333 11:10:26 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:40.333 11:10:26 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:40.333 11:10:26 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:40.333 11:10:26 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:40.333 11:10:26 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:40.333 11:10:26 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:40.333 11:10:26 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:40.333 11:10:26 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=721474 00:04:40.333 11:10:26 -- common/autotest_common.sh@1598 -- # waitforlisten 721474 00:04:40.333 11:10:26 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.333 11:10:26 -- common/autotest_common.sh@829 -- # '[' -z 721474 ']' 00:04:40.333 11:10:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.333 11:10:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.333 11:10:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.333 11:10:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.333 11:10:26 -- common/autotest_common.sh@10 -- # set +x 00:04:40.333 [2024-07-12 11:10:26.636026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:40.333 [2024-07-12 11:10:26.636122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721474 ] 00:04:40.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.591 [2024-07-12 11:10:26.738705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.591 [2024-07-12 11:10:26.947509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.527 11:10:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.527 11:10:27 -- common/autotest_common.sh@862 -- # return 0 00:04:41.527 11:10:27 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:41.527 11:10:27 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:41.527 11:10:27 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:44.815 nvme0n1 00:04:44.815 11:10:30 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:44.815 [2024-07-12 11:10:31.045415] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:44.815 request: 00:04:44.815 { 00:04:44.815 "nvme_ctrlr_name": "nvme0", 00:04:44.815 "password": "test", 00:04:44.815 "method": "bdev_nvme_opal_revert", 00:04:44.815 "req_id": 1 00:04:44.815 } 00:04:44.815 Got JSON-RPC error response 00:04:44.815 response: 00:04:44.815 { 00:04:44.815 "code": -32602, 00:04:44.815 "message": "Invalid parameters" 00:04:44.815 } 00:04:44.815 11:10:31 -- common/autotest_common.sh@1604 -- # true 00:04:44.815 11:10:31 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:44.815 11:10:31 -- common/autotest_common.sh@1608 -- # killprocess 721474 00:04:44.815 11:10:31 -- 
common/autotest_common.sh@948 -- # '[' -z 721474 ']' 00:04:44.815 11:10:31 -- common/autotest_common.sh@952 -- # kill -0 721474 00:04:44.815 11:10:31 -- common/autotest_common.sh@953 -- # uname 00:04:44.815 11:10:31 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.815 11:10:31 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 721474 00:04:44.815 11:10:31 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.815 11:10:31 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.815 11:10:31 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 721474' 00:04:44.815 killing process with pid 721474 00:04:44.815 11:10:31 -- common/autotest_common.sh@967 -- # kill 721474 00:04:44.815 11:10:31 -- common/autotest_common.sh@972 -- # wait 721474 00:04:49.003 11:10:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:49.003 11:10:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:49.003 11:10:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.003 11:10:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.003 11:10:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:49.003 11:10:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.003 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.003 11:10:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:49.003 11:10:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.003 11:10:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.003 11:10:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.003 11:10:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.003 ************************************ 00:04:49.003 START TEST env 00:04:49.003 ************************************ 00:04:49.003 11:10:34 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:49.003 * Looking for 
test storage... 00:04:49.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:49.003 11:10:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.003 11:10:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.003 11:10:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.003 11:10:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.003 ************************************ 00:04:49.003 START TEST env_memory 00:04:49.003 ************************************ 00:04:49.003 11:10:34 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:49.003 00:04:49.003 00:04:49.003 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.003 http://cunit.sourceforge.net/ 00:04:49.003 00:04:49.003 00:04:49.003 Suite: memory 00:04:49.003 Test: alloc and free memory map ...[2024-07-12 11:10:34.924566] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.003 passed 00:04:49.003 Test: mem map translation ...[2024-07-12 11:10:34.962888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.003 [2024-07-12 11:10:34.962912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.003 [2024-07-12 11:10:34.962961] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.003 [2024-07-12 11:10:34.962975] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.003 passed 00:04:49.003 Test: mem map registration ...[2024-07-12 11:10:35.023917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:49.003 [2024-07-12 11:10:35.023940] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:49.003 passed 00:04:49.003 Test: mem map adjacent registrations ...passed 00:04:49.003 00:04:49.003 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.003 suites 1 1 n/a 0 0 00:04:49.003 tests 4 4 4 0 0 00:04:49.003 asserts 152 152 152 0 n/a 00:04:49.003 00:04:49.003 Elapsed time = 0.224 seconds 00:04:49.003 00:04:49.003 real 0m0.257s 00:04:49.003 user 0m0.235s 00:04:49.003 sys 0m0.022s 00:04:49.004 11:10:35 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.004 11:10:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.004 ************************************ 00:04:49.004 END TEST env_memory 00:04:49.004 ************************************ 00:04:49.004 11:10:35 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.004 11:10:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.004 11:10:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.004 11:10:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.004 11:10:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.004 ************************************ 00:04:49.004 START TEST env_vtophys 00:04:49.004 ************************************ 00:04:49.004 11:10:35 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.004 EAL: lib.eal log level changed from notice to debug 00:04:49.004 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.004 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.004 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.004 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.004 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.004 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.004 EAL: Detected lcore 6 as core 6 on socket 0 00:04:49.004 EAL: Detected lcore 7 as core 8 on socket 0 00:04:49.004 EAL: Detected lcore 8 as core 9 on socket 0 00:04:49.004 EAL: Detected lcore 9 as core 10 on socket 0 00:04:49.004 EAL: Detected lcore 10 as core 11 on socket 0 00:04:49.004 EAL: Detected lcore 11 as core 12 on socket 0 00:04:49.004 EAL: Detected lcore 12 as core 13 on socket 0 00:04:49.004 EAL: Detected lcore 13 as core 16 on socket 0 00:04:49.004 EAL: Detected lcore 14 as core 17 on socket 0 00:04:49.004 EAL: Detected lcore 15 as core 18 on socket 0 00:04:49.004 EAL: Detected lcore 16 as core 19 on socket 0 00:04:49.004 EAL: Detected lcore 17 as core 20 on socket 0 00:04:49.004 EAL: Detected lcore 18 as core 21 on socket 0 00:04:49.004 EAL: Detected lcore 19 as core 25 on socket 0 00:04:49.004 EAL: Detected lcore 20 as core 26 on socket 0 00:04:49.004 EAL: Detected lcore 21 as core 27 on socket 0 00:04:49.004 EAL: Detected lcore 22 as core 28 on socket 0 00:04:49.004 EAL: Detected lcore 23 as core 29 on socket 0 00:04:49.004 EAL: Detected lcore 24 as core 0 on socket 1 00:04:49.004 EAL: Detected lcore 25 as core 1 on socket 1 00:04:49.004 EAL: Detected lcore 26 as core 2 on socket 1 00:04:49.004 EAL: Detected lcore 27 as core 3 on socket 1 00:04:49.004 EAL: Detected lcore 28 as core 4 on socket 1 00:04:49.004 EAL: Detected lcore 29 as core 5 on socket 1 00:04:49.004 EAL: Detected lcore 30 as core 6 on socket 1 00:04:49.004 EAL: Detected lcore 31 as core 9 on socket 
1 00:04:49.004 EAL: Detected lcore 32 as core 10 on socket 1 00:04:49.004 EAL: Detected lcore 33 as core 11 on socket 1 00:04:49.004 EAL: Detected lcore 34 as core 12 on socket 1 00:04:49.004 EAL: Detected lcore 35 as core 13 on socket 1 00:04:49.004 EAL: Detected lcore 36 as core 16 on socket 1 00:04:49.004 EAL: Detected lcore 37 as core 17 on socket 1 00:04:49.004 EAL: Detected lcore 38 as core 18 on socket 1 00:04:49.004 EAL: Detected lcore 39 as core 19 on socket 1 00:04:49.004 EAL: Detected lcore 40 as core 20 on socket 1 00:04:49.004 EAL: Detected lcore 41 as core 21 on socket 1 00:04:49.004 EAL: Detected lcore 42 as core 24 on socket 1 00:04:49.004 EAL: Detected lcore 43 as core 25 on socket 1 00:04:49.004 EAL: Detected lcore 44 as core 26 on socket 1 00:04:49.004 EAL: Detected lcore 45 as core 27 on socket 1 00:04:49.004 EAL: Detected lcore 46 as core 28 on socket 1 00:04:49.004 EAL: Detected lcore 47 as core 29 on socket 1 00:04:49.004 EAL: Detected lcore 48 as core 0 on socket 0 00:04:49.004 EAL: Detected lcore 49 as core 1 on socket 0 00:04:49.004 EAL: Detected lcore 50 as core 2 on socket 0 00:04:49.004 EAL: Detected lcore 51 as core 3 on socket 0 00:04:49.004 EAL: Detected lcore 52 as core 4 on socket 0 00:04:49.004 EAL: Detected lcore 53 as core 5 on socket 0 00:04:49.004 EAL: Detected lcore 54 as core 6 on socket 0 00:04:49.004 EAL: Detected lcore 55 as core 8 on socket 0 00:04:49.004 EAL: Detected lcore 56 as core 9 on socket 0 00:04:49.004 EAL: Detected lcore 57 as core 10 on socket 0 00:04:49.004 EAL: Detected lcore 58 as core 11 on socket 0 00:04:49.004 EAL: Detected lcore 59 as core 12 on socket 0 00:04:49.004 EAL: Detected lcore 60 as core 13 on socket 0 00:04:49.004 EAL: Detected lcore 61 as core 16 on socket 0 00:04:49.004 EAL: Detected lcore 62 as core 17 on socket 0 00:04:49.004 EAL: Detected lcore 63 as core 18 on socket 0 00:04:49.004 EAL: Detected lcore 64 as core 19 on socket 0 00:04:49.004 EAL: Detected lcore 65 as core 20 on socket 0 
00:04:49.004 EAL: Detected lcore 66 as core 21 on socket 0 00:04:49.004 EAL: Detected lcore 67 as core 25 on socket 0 00:04:49.004 EAL: Detected lcore 68 as core 26 on socket 0 00:04:49.004 EAL: Detected lcore 69 as core 27 on socket 0 00:04:49.004 EAL: Detected lcore 70 as core 28 on socket 0 00:04:49.004 EAL: Detected lcore 71 as core 29 on socket 0 00:04:49.004 EAL: Detected lcore 72 as core 0 on socket 1 00:04:49.004 EAL: Detected lcore 73 as core 1 on socket 1 00:04:49.004 EAL: Detected lcore 74 as core 2 on socket 1 00:04:49.004 EAL: Detected lcore 75 as core 3 on socket 1 00:04:49.004 EAL: Detected lcore 76 as core 4 on socket 1 00:04:49.004 EAL: Detected lcore 77 as core 5 on socket 1 00:04:49.004 EAL: Detected lcore 78 as core 6 on socket 1 00:04:49.004 EAL: Detected lcore 79 as core 9 on socket 1 00:04:49.004 EAL: Detected lcore 80 as core 10 on socket 1 00:04:49.004 EAL: Detected lcore 81 as core 11 on socket 1 00:04:49.004 EAL: Detected lcore 82 as core 12 on socket 1 00:04:49.004 EAL: Detected lcore 83 as core 13 on socket 1 00:04:49.004 EAL: Detected lcore 84 as core 16 on socket 1 00:04:49.004 EAL: Detected lcore 85 as core 17 on socket 1 00:04:49.004 EAL: Detected lcore 86 as core 18 on socket 1 00:04:49.004 EAL: Detected lcore 87 as core 19 on socket 1 00:04:49.004 EAL: Detected lcore 88 as core 20 on socket 1 00:04:49.004 EAL: Detected lcore 89 as core 21 on socket 1 00:04:49.004 EAL: Detected lcore 90 as core 24 on socket 1 00:04:49.004 EAL: Detected lcore 91 as core 25 on socket 1 00:04:49.004 EAL: Detected lcore 92 as core 26 on socket 1 00:04:49.004 EAL: Detected lcore 93 as core 27 on socket 1 00:04:49.004 EAL: Detected lcore 94 as core 28 on socket 1 00:04:49.004 EAL: Detected lcore 95 as core 29 on socket 1 00:04:49.004 EAL: Maximum logical cores by configuration: 128 00:04:49.004 EAL: Detected CPU lcores: 96 00:04:49.004 EAL: Detected NUMA nodes: 2 00:04:49.004 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.004 EAL: Detected 
shared linkage of DPDK 00:04:49.004 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.004 EAL: Bus pci wants IOVA as 'DC' 00:04:49.004 EAL: Buses did not request a specific IOVA mode. 00:04:49.004 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:49.004 EAL: Selected IOVA mode 'VA' 00:04:49.004 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.004 EAL: Probing VFIO support... 00:04:49.004 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.004 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.004 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.004 EAL: VFIO support initialized 00:04:49.004 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.004 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.004 EAL: Setting up physically contiguous memory... 00:04:49.004 EAL: Setting maximum number of open files to 524288 00:04:49.004 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.004 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.004 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.004 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:04:49.004 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.004 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.004 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.004 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.004 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.004 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:49.004 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.004 EAL: Hugepages will be freed exactly as allocated. 00:04:49.004 EAL: No shared files mode enabled, IPC is disabled 00:04:49.004 EAL: No shared files mode enabled, IPC is disabled 00:04:49.004 EAL: TSC frequency is ~2300000 KHz 00:04:49.004 EAL: Main lcore 0 is ready (tid=7f294599aa40;cpuset=[0]) 00:04:49.004 EAL: Trying to obtain current memory policy. 00:04:49.004 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.004 EAL: Restoring previous memory policy: 0 00:04:49.004 EAL: request: mp_malloc_sync 00:04:49.004 EAL: No shared files mode enabled, IPC is disabled 00:04:49.004 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.005 EAL: No shared files mode enabled, IPC is disabled 00:04:49.005 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.005 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.005 00:04:49.005 00:04:49.005 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.005 http://cunit.sourceforge.net/ 00:04:49.005 00:04:49.005 00:04:49.005 Suite: components_suite 00:04:49.264 Test: vtophys_malloc_test ...passed 00:04:49.264 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:49.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.264 EAL: Restoring previous memory policy: 4 00:04:49.264 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.264 EAL: request: mp_malloc_sync 00:04:49.264 EAL: No shared files mode enabled, IPC is disabled 00:04:49.264 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.523 EAL: Trying to obtain current memory policy. 00:04:49.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.523 EAL: Restoring previous memory policy: 4 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.523 EAL: Trying to obtain current memory policy. 00:04:49.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.523 EAL: Restoring previous memory policy: 4 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.523 EAL: Trying to obtain current memory policy. 
00:04:49.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.523 EAL: Restoring previous memory policy: 4 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.523 EAL: Trying to obtain current memory policy. 00:04:49.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.523 EAL: Restoring previous memory policy: 4 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.523 EAL: Trying to obtain current memory policy. 00:04:49.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.523 EAL: Restoring previous memory policy: 4 00:04:49.523 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.523 EAL: request: mp_malloc_sync 00:04:49.523 EAL: No shared files mode enabled, IPC is disabled 00:04:49.523 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.781 EAL: request: mp_malloc_sync 00:04:49.781 EAL: No shared files mode enabled, IPC is disabled 00:04:49.781 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.781 EAL: Trying to obtain current memory policy. 
00:04:49.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.039 EAL: Restoring previous memory policy: 4 00:04:50.039 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.040 EAL: request: mp_malloc_sync 00:04:50.040 EAL: No shared files mode enabled, IPC is disabled 00:04:50.040 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.298 EAL: request: mp_malloc_sync 00:04:50.298 EAL: No shared files mode enabled, IPC is disabled 00:04:50.298 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.298 EAL: Trying to obtain current memory policy. 00:04:50.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.557 EAL: Restoring previous memory policy: 4 00:04:50.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.557 EAL: request: mp_malloc_sync 00:04:50.557 EAL: No shared files mode enabled, IPC is disabled 00:04:50.557 EAL: Heap on socket 0 was expanded by 258MB 00:04:51.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.125 EAL: request: mp_malloc_sync 00:04:51.125 EAL: No shared files mode enabled, IPC is disabled 00:04:51.125 EAL: Heap on socket 0 was shrunk by 258MB 00:04:51.383 EAL: Trying to obtain current memory policy. 00:04:51.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.642 EAL: Restoring previous memory policy: 4 00:04:51.642 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.642 EAL: request: mp_malloc_sync 00:04:51.642 EAL: No shared files mode enabled, IPC is disabled 00:04:51.642 EAL: Heap on socket 0 was expanded by 514MB 00:04:52.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.578 EAL: request: mp_malloc_sync 00:04:52.578 EAL: No shared files mode enabled, IPC is disabled 00:04:52.578 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.513 EAL: Trying to obtain current memory policy. 
00:04:53.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.513 EAL: Restoring previous memory policy: 4 00:04:53.513 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.513 EAL: request: mp_malloc_sync 00:04:53.513 EAL: No shared files mode enabled, IPC is disabled 00:04:53.513 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.048 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.048 EAL: request: mp_malloc_sync 00:04:56.048 EAL: No shared files mode enabled, IPC is disabled 00:04:56.048 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:57.423 passed 00:04:57.423 00:04:57.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.423 suites 1 1 n/a 0 0 00:04:57.423 tests 2 2 2 0 0 00:04:57.423 asserts 497 497 497 0 n/a 00:04:57.423 00:04:57.423 Elapsed time = 8.279 seconds 00:04:57.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.423 EAL: request: mp_malloc_sync 00:04:57.423 EAL: No shared files mode enabled, IPC is disabled 00:04:57.423 EAL: Heap on socket 0 was shrunk by 2MB 00:04:57.423 EAL: No shared files mode enabled, IPC is disabled 00:04:57.423 EAL: No shared files mode enabled, IPC is disabled 00:04:57.423 EAL: No shared files mode enabled, IPC is disabled 00:04:57.423 00:04:57.423 real 0m8.506s 00:04:57.423 user 0m7.709s 00:04:57.423 sys 0m0.744s 00:04:57.423 11:10:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.423 11:10:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:57.423 ************************************ 00:04:57.423 END TEST env_vtophys 00:04:57.423 ************************************ 00:04:57.423 11:10:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.423 11:10:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.423 11:10:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.423 11:10:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:57.423 11:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.423 ************************************ 00:04:57.423 START TEST env_pci 00:04:57.423 ************************************ 00:04:57.423 11:10:43 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.682 00:04:57.682 00:04:57.682 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.682 http://cunit.sourceforge.net/ 00:04:57.682 00:04:57.682 00:04:57.682 Suite: pci 00:04:57.682 Test: pci_hook ...[2024-07-12 11:10:43.788882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 724402 has claimed it 00:04:57.682 EAL: Cannot find device (10000:00:01.0) 00:04:57.682 EAL: Failed to attach device on primary process 00:04:57.682 passed 00:04:57.682 00:04:57.682 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.682 suites 1 1 n/a 0 0 00:04:57.682 tests 1 1 1 0 0 00:04:57.682 asserts 25 25 25 0 n/a 00:04:57.682 00:04:57.682 Elapsed time = 0.045 seconds 00:04:57.682 00:04:57.682 real 0m0.121s 00:04:57.682 user 0m0.061s 00:04:57.682 sys 0m0.060s 00:04:57.682 11:10:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.682 11:10:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:57.682 ************************************ 00:04:57.682 END TEST env_pci 00:04:57.682 ************************************ 00:04:57.682 11:10:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.682 11:10:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:57.682 11:10:43 env -- env/env.sh@15 -- # uname 00:04:57.682 11:10:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:57.682 11:10:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:57.682 11:10:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.682 11:10:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:57.682 11:10:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.682 11:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.682 ************************************ 00:04:57.682 START TEST env_dpdk_post_init 00:04:57.682 ************************************ 00:04:57.682 11:10:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.682 EAL: Detected CPU lcores: 96 00:04:57.682 EAL: Detected NUMA nodes: 2 00:04:57.682 EAL: Detected shared linkage of DPDK 00:04:57.682 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.682 EAL: Selected IOVA mode 'VA' 00:04:57.682 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.682 EAL: VFIO support initialized 00:04:57.940 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.940 EAL: Using IOMMU type 1 (Type 1) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.5 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:57.940 EAL: Ignore mapping IO port bar(1) 00:04:57.940 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:58.876 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:58.876 EAL: Ignore mapping IO port bar(1) 00:04:58.876 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:58.877 EAL: Ignore mapping IO port bar(1) 00:04:58.877 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:02.166 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:02.166 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:02.166 Starting DPDK initialization... 00:05:02.166 Starting SPDK post initialization... 00:05:02.166 SPDK NVMe probe 00:05:02.166 Attaching to 0000:5e:00.0 00:05:02.166 Attached to 0000:5e:00.0 00:05:02.166 Cleaning up... 
00:05:02.166 00:05:02.166 real 0m4.497s 00:05:02.166 user 0m3.391s 00:05:02.166 sys 0m0.171s 00:05:02.167 11:10:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.167 11:10:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.167 ************************************ 00:05:02.167 END TEST env_dpdk_post_init 00:05:02.167 ************************************ 00:05:02.167 11:10:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.167 11:10:48 env -- env/env.sh@26 -- # uname 00:05:02.167 11:10:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:02.167 11:10:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.167 11:10:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.167 11:10:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.167 11:10:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.167 ************************************ 00:05:02.167 START TEST env_mem_callbacks 00:05:02.167 ************************************ 00:05:02.167 11:10:48 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:02.425 EAL: Detected CPU lcores: 96 00:05:02.425 EAL: Detected NUMA nodes: 2 00:05:02.425 EAL: Detected shared linkage of DPDK 00:05:02.425 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.425 EAL: Selected IOVA mode 'VA' 00:05:02.425 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.425 EAL: VFIO support initialized 00:05:02.425 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.425 00:05:02.425 00:05:02.425 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.425 http://cunit.sourceforge.net/ 00:05:02.425 00:05:02.425 00:05:02.425 Suite: memory 00:05:02.425 Test: test ... 
00:05:02.425 register 0x200000200000 2097152 00:05:02.425 malloc 3145728 00:05:02.425 register 0x200000400000 4194304 00:05:02.425 buf 0x2000004fffc0 len 3145728 PASSED 00:05:02.425 malloc 64 00:05:02.425 buf 0x2000004ffec0 len 64 PASSED 00:05:02.425 malloc 4194304 00:05:02.425 register 0x200000800000 6291456 00:05:02.425 buf 0x2000009fffc0 len 4194304 PASSED 00:05:02.425 free 0x2000004fffc0 3145728 00:05:02.425 free 0x2000004ffec0 64 00:05:02.425 unregister 0x200000400000 4194304 PASSED 00:05:02.425 free 0x2000009fffc0 4194304 00:05:02.425 unregister 0x200000800000 6291456 PASSED 00:05:02.425 malloc 8388608 00:05:02.425 register 0x200000400000 10485760 00:05:02.425 buf 0x2000005fffc0 len 8388608 PASSED 00:05:02.425 free 0x2000005fffc0 8388608 00:05:02.425 unregister 0x200000400000 10485760 PASSED 00:05:02.425 passed 00:05:02.425 00:05:02.425 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.425 suites 1 1 n/a 0 0 00:05:02.425 tests 1 1 1 0 0 00:05:02.425 asserts 15 15 15 0 n/a 00:05:02.425 00:05:02.425 Elapsed time = 0.069 seconds 00:05:02.425 00:05:02.425 real 0m0.169s 00:05:02.425 user 0m0.100s 00:05:02.425 sys 0m0.069s 00:05:02.425 11:10:48 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.425 11:10:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:02.425 ************************************ 00:05:02.425 END TEST env_mem_callbacks 00:05:02.425 ************************************ 00:05:02.425 11:10:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.425 00:05:02.425 real 0m13.959s 00:05:02.425 user 0m11.651s 00:05:02.425 sys 0m1.348s 00:05:02.425 11:10:48 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.425 11:10:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.425 ************************************ 00:05:02.425 END TEST env 00:05:02.425 ************************************ 00:05:02.425 11:10:48 -- common/autotest_common.sh@1142 -- # return 0 
00:05:02.425 11:10:48 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.425 11:10:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.425 11:10:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.425 11:10:48 -- common/autotest_common.sh@10 -- # set +x 00:05:02.425 ************************************ 00:05:02.425 START TEST rpc 00:05:02.425 ************************************ 00:05:02.426 11:10:48 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.685 * Looking for test storage... 00:05:02.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.685 11:10:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=725447 00:05:02.685 11:10:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:02.685 11:10:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.685 11:10:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 725447 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@829 -- # '[' -z 725447 ']' 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.685 11:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.685 [2024-07-12 11:10:48.935253] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:02.685 [2024-07-12 11:10:48.935348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725447 ] 00:05:02.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.685 [2024-07-12 11:10:49.038978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.944 [2024-07-12 11:10:49.245825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.944 [2024-07-12 11:10:49.245870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 725447' to capture a snapshot of events at runtime. 00:05:02.944 [2024-07-12 11:10:49.245880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.944 [2024-07-12 11:10:49.245892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.944 [2024-07-12 11:10:49.245899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid725447 for offline analysis/debug. 
00:05:02.944 [2024-07-12 11:10:49.245928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.880 11:10:50 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.880 11:10:50 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.880 11:10:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.880 11:10:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.880 11:10:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.880 11:10:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.880 11:10:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.880 11:10:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.880 11:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.880 ************************************ 00:05:03.880 START TEST rpc_integrity 00:05:03.880 ************************************ 00:05:03.880 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:03.880 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.880 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.880 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.880 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.880 11:10:50 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.880 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.139 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.139 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.139 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:04.139 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.139 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.139 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.139 { 00:05:04.139 "name": "Malloc0", 00:05:04.139 "aliases": [ 00:05:04.139 "343cd0b6-0bed-4616-abfc-7745fd090aed" 00:05:04.140 ], 00:05:04.140 "product_name": "Malloc disk", 00:05:04.140 "block_size": 512, 00:05:04.140 "num_blocks": 16384, 00:05:04.140 "uuid": "343cd0b6-0bed-4616-abfc-7745fd090aed", 00:05:04.140 "assigned_rate_limits": { 00:05:04.140 "rw_ios_per_sec": 0, 00:05:04.140 "rw_mbytes_per_sec": 0, 00:05:04.140 "r_mbytes_per_sec": 0, 00:05:04.140 "w_mbytes_per_sec": 0 00:05:04.140 }, 00:05:04.140 "claimed": false, 00:05:04.140 "zoned": false, 00:05:04.140 "supported_io_types": { 00:05:04.140 "read": true, 00:05:04.140 "write": true, 00:05:04.140 "unmap": true, 00:05:04.140 "flush": true, 00:05:04.140 "reset": true, 00:05:04.140 "nvme_admin": false, 00:05:04.140 "nvme_io": false, 00:05:04.140 "nvme_io_md": false, 00:05:04.140 "write_zeroes": true, 00:05:04.140 "zcopy": true, 00:05:04.140 "get_zone_info": false, 00:05:04.140 
"zone_management": false, 00:05:04.140 "zone_append": false, 00:05:04.140 "compare": false, 00:05:04.140 "compare_and_write": false, 00:05:04.140 "abort": true, 00:05:04.140 "seek_hole": false, 00:05:04.140 "seek_data": false, 00:05:04.140 "copy": true, 00:05:04.140 "nvme_iov_md": false 00:05:04.140 }, 00:05:04.140 "memory_domains": [ 00:05:04.140 { 00:05:04.140 "dma_device_id": "system", 00:05:04.140 "dma_device_type": 1 00:05:04.140 }, 00:05:04.140 { 00:05:04.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.140 "dma_device_type": 2 00:05:04.140 } 00:05:04.140 ], 00:05:04.140 "driver_specific": {} 00:05:04.140 } 00:05:04.140 ]' 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 [2024-07-12 11:10:50.331296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:04.140 [2024-07-12 11:10:50.331349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.140 [2024-07-12 11:10:50.331370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:05:04.140 [2024-07-12 11:10:50.331389] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.140 [2024-07-12 11:10:50.333286] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.140 [2024-07-12 11:10:50.333323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.140 Passthru0 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.140 { 00:05:04.140 "name": "Malloc0", 00:05:04.140 "aliases": [ 00:05:04.140 "343cd0b6-0bed-4616-abfc-7745fd090aed" 00:05:04.140 ], 00:05:04.140 "product_name": "Malloc disk", 00:05:04.140 "block_size": 512, 00:05:04.140 "num_blocks": 16384, 00:05:04.140 "uuid": "343cd0b6-0bed-4616-abfc-7745fd090aed", 00:05:04.140 "assigned_rate_limits": { 00:05:04.140 "rw_ios_per_sec": 0, 00:05:04.140 "rw_mbytes_per_sec": 0, 00:05:04.140 "r_mbytes_per_sec": 0, 00:05:04.140 "w_mbytes_per_sec": 0 00:05:04.140 }, 00:05:04.140 "claimed": true, 00:05:04.140 "claim_type": "exclusive_write", 00:05:04.140 "zoned": false, 00:05:04.140 "supported_io_types": { 00:05:04.140 "read": true, 00:05:04.140 "write": true, 00:05:04.140 "unmap": true, 00:05:04.140 "flush": true, 00:05:04.140 "reset": true, 00:05:04.140 "nvme_admin": false, 00:05:04.140 "nvme_io": false, 00:05:04.140 "nvme_io_md": false, 00:05:04.140 "write_zeroes": true, 00:05:04.140 "zcopy": true, 00:05:04.140 "get_zone_info": false, 00:05:04.140 "zone_management": false, 00:05:04.140 "zone_append": false, 00:05:04.140 "compare": false, 00:05:04.140 "compare_and_write": false, 00:05:04.140 "abort": true, 00:05:04.140 "seek_hole": false, 00:05:04.140 "seek_data": false, 00:05:04.140 "copy": true, 00:05:04.140 "nvme_iov_md": false 00:05:04.140 }, 00:05:04.140 "memory_domains": [ 00:05:04.140 { 00:05:04.140 "dma_device_id": "system", 00:05:04.140 "dma_device_type": 1 00:05:04.140 }, 00:05:04.140 { 00:05:04.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.140 "dma_device_type": 2 00:05:04.140 } 00:05:04.140 ], 00:05:04.140 "driver_specific": {} 00:05:04.140 }, 00:05:04.140 { 
00:05:04.140 "name": "Passthru0", 00:05:04.140 "aliases": [ 00:05:04.140 "103fa6c7-06fb-5667-98f8-3891afa453ec" 00:05:04.140 ], 00:05:04.140 "product_name": "passthru", 00:05:04.140 "block_size": 512, 00:05:04.140 "num_blocks": 16384, 00:05:04.140 "uuid": "103fa6c7-06fb-5667-98f8-3891afa453ec", 00:05:04.140 "assigned_rate_limits": { 00:05:04.140 "rw_ios_per_sec": 0, 00:05:04.140 "rw_mbytes_per_sec": 0, 00:05:04.140 "r_mbytes_per_sec": 0, 00:05:04.140 "w_mbytes_per_sec": 0 00:05:04.140 }, 00:05:04.140 "claimed": false, 00:05:04.140 "zoned": false, 00:05:04.140 "supported_io_types": { 00:05:04.140 "read": true, 00:05:04.140 "write": true, 00:05:04.140 "unmap": true, 00:05:04.140 "flush": true, 00:05:04.140 "reset": true, 00:05:04.140 "nvme_admin": false, 00:05:04.140 "nvme_io": false, 00:05:04.140 "nvme_io_md": false, 00:05:04.140 "write_zeroes": true, 00:05:04.140 "zcopy": true, 00:05:04.140 "get_zone_info": false, 00:05:04.140 "zone_management": false, 00:05:04.140 "zone_append": false, 00:05:04.140 "compare": false, 00:05:04.140 "compare_and_write": false, 00:05:04.140 "abort": true, 00:05:04.140 "seek_hole": false, 00:05:04.140 "seek_data": false, 00:05:04.140 "copy": true, 00:05:04.140 "nvme_iov_md": false 00:05:04.140 }, 00:05:04.140 "memory_domains": [ 00:05:04.140 { 00:05:04.140 "dma_device_id": "system", 00:05:04.140 "dma_device_type": 1 00:05:04.140 }, 00:05:04.140 { 00:05:04.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.140 "dma_device_type": 2 00:05:04.140 } 00:05:04.140 ], 00:05:04.140 "driver_specific": { 00:05:04.140 "passthru": { 00:05:04.140 "name": "Passthru0", 00:05:04.140 "base_bdev_name": "Malloc0" 00:05:04.140 } 00:05:04.140 } 00:05:04.140 } 00:05:04.140 ]' 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.140 11:10:50 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.140 11:10:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.140 00:05:04.140 real 0m0.301s 00:05:04.140 user 0m0.173s 00:05:04.140 sys 0m0.032s 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.140 11:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.140 ************************************ 00:05:04.140 END TEST rpc_integrity 00:05:04.140 ************************************ 00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.400 11:10:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.400 
************************************
00:05:04.400 START TEST rpc_plugins
00:05:04.400 ************************************
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:04.400 {
00:05:04.400 "name": "Malloc1",
00:05:04.400 "aliases": [
00:05:04.400 "7ae83e67-8a78-4ed6-a38f-ef5ff08c7fed"
00:05:04.400 ],
00:05:04.400 "product_name": "Malloc disk",
00:05:04.400 "block_size": 4096,
00:05:04.400 "num_blocks": 256,
00:05:04.400 "uuid": "7ae83e67-8a78-4ed6-a38f-ef5ff08c7fed",
00:05:04.400 "assigned_rate_limits": {
00:05:04.400 "rw_ios_per_sec": 0,
00:05:04.400 "rw_mbytes_per_sec": 0,
00:05:04.400 "r_mbytes_per_sec": 0,
00:05:04.400 "w_mbytes_per_sec": 0
00:05:04.400 },
00:05:04.400 "claimed": false,
00:05:04.400 "zoned": false,
00:05:04.400 "supported_io_types": {
00:05:04.400 "read": true,
00:05:04.400 "write": true,
00:05:04.400 "unmap": true,
00:05:04.400 "flush": true,
00:05:04.400 "reset": true,
00:05:04.400 "nvme_admin": false,
00:05:04.400 "nvme_io": false,
00:05:04.400 "nvme_io_md": false,
00:05:04.400 "write_zeroes": true,
00:05:04.400 "zcopy": true,
00:05:04.400 "get_zone_info": false,
00:05:04.400 "zone_management": false,
00:05:04.400 "zone_append": false,
00:05:04.400 "compare": false,
00:05:04.400 "compare_and_write": false,
00:05:04.400 "abort": true,
00:05:04.400 "seek_hole": false,
00:05:04.400 "seek_data": false,
00:05:04.400 "copy": true,
00:05:04.400 "nvme_iov_md": false
00:05:04.400 },
00:05:04.400 "memory_domains": [
00:05:04.400 {
00:05:04.400 "dma_device_id": "system",
00:05:04.400 "dma_device_type": 1
00:05:04.400 },
00:05:04.400 {
00:05:04.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.400 "dma_device_type": 2
00:05:04.400 }
00:05:04.400 ],
00:05:04.400 "driver_specific": {}
00:05:04.400 }
00:05:04.400 ]'
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:04.400 11:10:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:04.400 
00:05:04.400 real	0m0.138s
00:05:04.400 user	0m0.083s
00:05:04.400 sys	0m0.014s
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.400 11:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 ************************************
00:05:04.400 END TEST rpc_plugins
00:05:04.400 ************************************
00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:04.400 11:10:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.400 11:10:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.400 ************************************
00:05:04.400 START TEST rpc_trace_cmd_test
00:05:04.400 ************************************
00:05:04.400 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test
00:05:04.400 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:04.659 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid725447",
00:05:04.659 "tpoint_group_mask": "0x8",
00:05:04.659 "iscsi_conn": {
00:05:04.659 "mask": "0x2",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "scsi": {
00:05:04.659 "mask": "0x4",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "bdev": {
00:05:04.659 "mask": "0x8",
00:05:04.659 "tpoint_mask": "0xffffffffffffffff"
00:05:04.659 },
00:05:04.659 "nvmf_rdma": {
00:05:04.659 "mask": "0x10",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "nvmf_tcp": {
00:05:04.659 "mask": "0x20",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "ftl": {
00:05:04.659 "mask": "0x40",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "blobfs": {
00:05:04.659 "mask": "0x80",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "dsa": {
00:05:04.659 "mask": "0x200",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "thread": {
00:05:04.659 "mask": "0x400",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "nvme_pcie": {
00:05:04.659 "mask": "0x800",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "iaa": {
00:05:04.659 "mask": "0x1000",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "nvme_tcp": {
00:05:04.659 "mask": "0x2000",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "bdev_nvme": {
00:05:04.659 "mask": "0x4000",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 },
00:05:04.659 "sock": {
00:05:04.659 "mask": "0x8000",
00:05:04.659 "tpoint_mask": "0x0"
00:05:04.659 }
00:05:04.659 }'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:04.659 
00:05:04.659 real	0m0.209s
00:05:04.659 user	0m0.181s
00:05:04.659 sys	0m0.020s
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.659 11:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:04.659 ************************************
00:05:04.659 END TEST rpc_trace_cmd_test
00:05:04.659 ************************************
00:05:04.659 11:10:50 rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:04.659 11:10:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:04.659 11:10:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:04.659 11:10:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:04.659 11:10:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.659 11:10:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.659 11:10:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.918 ************************************
00:05:04.918 START TEST rpc_daemon_integrity
00:05:04.918 ************************************
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.918 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:04.918 {
00:05:04.918 "name": "Malloc2",
00:05:04.918 "aliases": [
00:05:04.918 "b82fecfb-2c12-4892-bc4b-cbbefe9172a3"
00:05:04.918 ],
00:05:04.918 "product_name": "Malloc disk",
00:05:04.918 "block_size": 512,
00:05:04.918 "num_blocks": 16384,
00:05:04.918 "uuid": "b82fecfb-2c12-4892-bc4b-cbbefe9172a3",
00:05:04.918 "assigned_rate_limits": {
00:05:04.918 "rw_ios_per_sec": 0,
00:05:04.918 "rw_mbytes_per_sec": 0,
00:05:04.918 "r_mbytes_per_sec": 0,
00:05:04.918 "w_mbytes_per_sec": 0
00:05:04.918 },
00:05:04.918 "claimed": false,
00:05:04.918 "zoned": false,
00:05:04.918 "supported_io_types": {
00:05:04.918 "read": true,
00:05:04.918 "write": true,
00:05:04.918 "unmap": true,
00:05:04.918 "flush": true,
00:05:04.918 "reset": true,
00:05:04.918 "nvme_admin": false,
00:05:04.918 "nvme_io": false,
00:05:04.918 "nvme_io_md": false,
00:05:04.918 "write_zeroes": true,
00:05:04.918 "zcopy": true,
00:05:04.918 "get_zone_info": false,
00:05:04.918 "zone_management": false,
00:05:04.918 "zone_append": false,
00:05:04.919 "compare": false,
00:05:04.919 "compare_and_write": false,
00:05:04.919 "abort": true,
00:05:04.919 "seek_hole": false,
00:05:04.919 "seek_data": false,
00:05:04.919 "copy": true,
00:05:04.919 "nvme_iov_md": false
00:05:04.919 },
00:05:04.919 "memory_domains": [
00:05:04.919 {
00:05:04.919 "dma_device_id": "system",
00:05:04.919 "dma_device_type": 1
00:05:04.919 },
00:05:04.919 {
00:05:04.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.919 "dma_device_type": 2
00:05:04.919 }
00:05:04.919 ],
00:05:04.919 "driver_specific": {}
00:05:04.919 }
00:05:04.919 ]'
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.919 [2024-07-12 11:10:51.171170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:04.919 [2024-07-12 11:10:51.171224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:04.919 [2024-07-12 11:10:51.171243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000024080
00:05:04.919 [2024-07-12 11:10:51.171257] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:04.919 [2024-07-12 11:10:51.173106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:04.919 [2024-07-12 11:10:51.173136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:04.919 Passthru0
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:04.919 {
00:05:04.919 "name": "Malloc2",
00:05:04.919 "aliases": [
00:05:04.919 "b82fecfb-2c12-4892-bc4b-cbbefe9172a3"
00:05:04.919 ],
00:05:04.919 "product_name": "Malloc disk",
00:05:04.919 "block_size": 512,
00:05:04.919 "num_blocks": 16384,
00:05:04.919 "uuid": "b82fecfb-2c12-4892-bc4b-cbbefe9172a3",
00:05:04.919 "assigned_rate_limits": {
00:05:04.919 "rw_ios_per_sec": 0,
00:05:04.919 "rw_mbytes_per_sec": 0,
00:05:04.919 "r_mbytes_per_sec": 0,
00:05:04.919 "w_mbytes_per_sec": 0
00:05:04.919 },
00:05:04.919 "claimed": true,
00:05:04.919 "claim_type": "exclusive_write",
00:05:04.919 "zoned": false,
00:05:04.919 "supported_io_types": {
00:05:04.919 "read": true,
00:05:04.919 "write": true,
00:05:04.919 "unmap": true,
00:05:04.919 "flush": true,
00:05:04.919 "reset": true,
00:05:04.919 "nvme_admin": false,
00:05:04.919 "nvme_io": false,
00:05:04.919 "nvme_io_md": false,
00:05:04.919 "write_zeroes": true,
00:05:04.919 "zcopy": true,
00:05:04.919 "get_zone_info": false,
00:05:04.919 "zone_management": false,
00:05:04.919 "zone_append": false,
00:05:04.919 "compare": false,
00:05:04.919 "compare_and_write": false,
00:05:04.919 "abort": true,
00:05:04.919 "seek_hole": false,
00:05:04.919 "seek_data": false,
00:05:04.919 "copy": true,
00:05:04.919 "nvme_iov_md": false
00:05:04.919 },
00:05:04.919 "memory_domains": [
00:05:04.919 {
00:05:04.919 "dma_device_id": "system",
00:05:04.919 "dma_device_type": 1
00:05:04.919 },
00:05:04.919 {
00:05:04.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.919 "dma_device_type": 2
00:05:04.919 }
00:05:04.919 ],
00:05:04.919 "driver_specific": {}
00:05:04.919 },
00:05:04.919 {
00:05:04.919 "name": "Passthru0",
00:05:04.919 "aliases": [
00:05:04.919 "7621ad48-76d4-594f-9913-4520ee4d2e44"
00:05:04.919 ],
00:05:04.919 "product_name": "passthru",
00:05:04.919 "block_size": 512,
00:05:04.919 "num_blocks": 16384,
00:05:04.919 "uuid": "7621ad48-76d4-594f-9913-4520ee4d2e44",
00:05:04.919 "assigned_rate_limits": {
00:05:04.919 "rw_ios_per_sec": 0,
00:05:04.919 "rw_mbytes_per_sec": 0,
00:05:04.919 "r_mbytes_per_sec": 0,
00:05:04.919 "w_mbytes_per_sec": 0
00:05:04.919 },
00:05:04.919 "claimed": false,
00:05:04.919 "zoned": false,
00:05:04.919 "supported_io_types": {
00:05:04.919 "read": true,
00:05:04.919 "write": true,
00:05:04.919 "unmap": true,
00:05:04.919 "flush": true,
00:05:04.919 "reset": true,
00:05:04.919 "nvme_admin": false,
00:05:04.919 "nvme_io": false,
00:05:04.919 "nvme_io_md": false,
00:05:04.919 "write_zeroes": true,
00:05:04.919 "zcopy": true,
00:05:04.919 "get_zone_info": false,
00:05:04.919 "zone_management": false,
00:05:04.919 "zone_append": false,
00:05:04.919 "compare": false,
00:05:04.919 "compare_and_write": false,
00:05:04.919 "abort": true,
00:05:04.919 "seek_hole": false,
00:05:04.919 "seek_data": false,
00:05:04.919 "copy": true,
00:05:04.919 "nvme_iov_md": false
00:05:04.919 },
00:05:04.919 "memory_domains": [
00:05:04.919 {
00:05:04.919 "dma_device_id": "system",
00:05:04.919 "dma_device_type": 1
00:05:04.919 },
00:05:04.919 {
00:05:04.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:04.919 "dma_device_type": 2
00:05:04.919 }
00:05:04.919 ],
00:05:04.919 "driver_specific": {
00:05:04.919 "passthru": {
00:05:04.919 "name": "Passthru0",
00:05:04.919 "base_bdev_name": "Malloc2"
00:05:04.919 }
00:05:04.919 }
00:05:04.919 }
00:05:04.919 ]'
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:04.919 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:05.178 
00:05:05.178 real	0m0.306s
00:05:05.178 user	0m0.175s
00:05:05.178 sys	0m0.035s
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:05.178 11:10:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:05.178 ************************************
00:05:05.178 END TEST rpc_daemon_integrity
00:05:05.178 ************************************
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:05.178 11:10:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:05.178 11:10:51 rpc -- rpc/rpc.sh@84 -- # killprocess 725447
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@948 -- # '[' -z 725447 ']'
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@952 -- # kill -0 725447
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@953 -- # uname
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 725447
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 725447'
00:05:05.178 killing process with pid 725447
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@967 -- # kill 725447
00:05:05.178 11:10:51 rpc -- common/autotest_common.sh@972 -- # wait 725447
00:05:07.714 
00:05:07.714 real	0m5.047s
00:05:07.714 user	0m5.626s
00:05:07.714 sys	0m0.779s
00:05:07.714 11:10:53 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:07.714 11:10:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.714 ************************************
00:05:07.714 END TEST rpc
00:05:07.714 ************************************
00:05:07.714 11:10:53 -- common/autotest_common.sh@1142 -- # return 0
00:05:07.714 11:10:53 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:07.714 11:10:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:07.714 11:10:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:07.714 11:10:53 -- common/autotest_common.sh@10 -- # set +x
00:05:07.714 ************************************
00:05:07.714 START TEST skip_rpc
00:05:07.714 ************************************
00:05:07.714 11:10:53 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:07.714 * Looking for test storage...
00:05:07.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:07.714 11:10:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:07.714 11:10:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:07.714 11:10:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:07.714 11:10:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:07.714 11:10:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:07.714 11:10:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.714 ************************************
00:05:07.714 START TEST skip_rpc
00:05:07.714 ************************************
00:05:07.714 11:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc
00:05:07.714 11:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=726517
00:05:07.714 11:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:07.714 11:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:07.714 11:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:07.973 [2024-07-12 11:10:54.087348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:07.973 [2024-07-12 11:10:54.087444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726517 ]
00:05:07.973 EAL: No free 2048 kB hugepages reported on node 1
00:05:07.973 [2024-07-12 11:10:54.187591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.232 [2024-07-12 11:10:54.398828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 726517
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 726517 ']'
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 726517
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 726517
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 726517'
00:05:13.546 killing process with pid 726517
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 726517
00:05:13.546 11:10:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 726517
00:05:15.451 
00:05:15.451 real	0m7.480s
00:05:15.451 user	0m7.118s
00:05:15.451 sys	0m0.377s
00:05:15.451 11:11:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:15.451 11:11:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:15.451 ************************************
00:05:15.451 END TEST skip_rpc
00:05:15.451 ************************************
00:05:15.451 11:11:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:15.451 11:11:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:15.451 11:11:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:15.451 11:11:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:15.451 11:11:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:15.451 ************************************
00:05:15.451 START TEST skip_rpc_with_json
00:05:15.451 ************************************
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=727723
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 727723
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 727723 ']'
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:15.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:15.451 11:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:15.451 [2024-07-12 11:11:01.633244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:15.451 [2024-07-12 11:11:01.633348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727723 ]
00:05:15.451 EAL: No free 2048 kB hugepages reported on node 1
00:05:15.451 [2024-07-12 11:11:01.738425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.711 [2024-07-12 11:11:01.941240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.649 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:16.649 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0
00:05:16.649 11:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:16.650 [2024-07-12 11:11:02.850519] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:16.650 request:
00:05:16.650 {
00:05:16.650 "trtype": "tcp",
00:05:16.650 "method": "nvmf_get_transports",
00:05:16.650 "req_id": 1
00:05:16.650 }
00:05:16.650 Got JSON-RPC error response
00:05:16.650 response:
00:05:16.650 {
00:05:16.650 "code": -19,
00:05:16.650 "message": "No such device"
00:05:16.650 }
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:16.650 [2024-07-12 11:11:02.858612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:16.650 11:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:16.650 {
00:05:16.650 "subsystems": [
00:05:16.650 {
00:05:16.650 "subsystem": "keyring",
00:05:16.650 "config": []
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "iobuf",
00:05:16.650 "config": [
00:05:16.650 {
00:05:16.650 "method": "iobuf_set_options",
00:05:16.650 "params": {
00:05:16.650 "small_pool_count": 8192,
00:05:16.650 "large_pool_count": 1024,
00:05:16.650 "small_bufsize": 8192,
00:05:16.650 "large_bufsize": 135168
00:05:16.650 }
00:05:16.650 }
00:05:16.650 ]
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "sock",
00:05:16.650 "config": [
00:05:16.650 {
00:05:16.650 "method": "sock_set_default_impl",
00:05:16.650 "params": {
00:05:16.650 "impl_name": "posix"
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "sock_impl_set_options",
00:05:16.650 "params": {
00:05:16.650 "impl_name": "ssl",
00:05:16.650 "recv_buf_size": 4096,
00:05:16.650 "send_buf_size": 4096,
00:05:16.650 "enable_recv_pipe": true,
00:05:16.650 "enable_quickack": false,
00:05:16.650 "enable_placement_id": 0,
00:05:16.650 "enable_zerocopy_send_server": true,
00:05:16.650 "enable_zerocopy_send_client": false,
00:05:16.650 "zerocopy_threshold": 0,
00:05:16.650 "tls_version": 0,
00:05:16.650 "enable_ktls": false
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "sock_impl_set_options",
00:05:16.650 "params": {
00:05:16.650 "impl_name": "posix",
00:05:16.650 "recv_buf_size": 2097152,
00:05:16.650 "send_buf_size": 2097152,
00:05:16.650 "enable_recv_pipe": true,
00:05:16.650 "enable_quickack": false,
00:05:16.650 "enable_placement_id": 0,
00:05:16.650 "enable_zerocopy_send_server": true,
00:05:16.650 "enable_zerocopy_send_client": false,
00:05:16.650 "zerocopy_threshold": 0,
00:05:16.650 "tls_version": 0,
00:05:16.650 "enable_ktls": false
00:05:16.650 }
00:05:16.650 }
00:05:16.650 ]
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "vmd",
00:05:16.650 "config": []
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "accel",
00:05:16.650 "config": [
00:05:16.650 {
00:05:16.650 "method": "accel_set_options",
00:05:16.650 "params": {
00:05:16.650 "small_cache_size": 128,
00:05:16.650 "large_cache_size": 16,
00:05:16.650 "task_count": 2048,
00:05:16.650 "sequence_count": 2048,
00:05:16.650 "buf_count": 2048
00:05:16.650 }
00:05:16.650 }
00:05:16.650 ]
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "bdev",
00:05:16.650 "config": [
00:05:16.650 {
00:05:16.650 "method": "bdev_set_options",
00:05:16.650 "params": {
00:05:16.650 "bdev_io_pool_size": 65535,
00:05:16.650 "bdev_io_cache_size": 256,
00:05:16.650 "bdev_auto_examine": true,
00:05:16.650 "iobuf_small_cache_size": 128,
00:05:16.650 "iobuf_large_cache_size": 16
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "bdev_raid_set_options",
00:05:16.650 "params": {
00:05:16.650 "process_window_size_kb": 1024
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "bdev_iscsi_set_options",
00:05:16.650 "params": {
00:05:16.650 "timeout_sec": 30
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "bdev_nvme_set_options",
00:05:16.650 "params": {
00:05:16.650 "action_on_timeout": "none",
00:05:16.650 "timeout_us": 0,
00:05:16.650 "timeout_admin_us": 0,
00:05:16.650 "keep_alive_timeout_ms": 10000,
00:05:16.650 "arbitration_burst": 0,
00:05:16.650 "low_priority_weight": 0,
00:05:16.650 "medium_priority_weight": 0,
00:05:16.650 "high_priority_weight": 0,
00:05:16.650 "nvme_adminq_poll_period_us": 10000,
00:05:16.650 "nvme_ioq_poll_period_us": 0,
00:05:16.650 "io_queue_requests": 0,
00:05:16.650 "delay_cmd_submit": true,
00:05:16.650 "transport_retry_count": 4,
00:05:16.650 "bdev_retry_count": 3,
00:05:16.650 "transport_ack_timeout": 0,
00:05:16.650 "ctrlr_loss_timeout_sec": 0,
00:05:16.650 "reconnect_delay_sec": 0,
00:05:16.650 "fast_io_fail_timeout_sec": 0,
00:05:16.650 "disable_auto_failback": false,
00:05:16.650 "generate_uuids": false,
00:05:16.650 "transport_tos": 0,
00:05:16.650 "nvme_error_stat": false,
00:05:16.650 "rdma_srq_size": 0,
00:05:16.650 "io_path_stat": false,
00:05:16.650 "allow_accel_sequence": false,
00:05:16.650 "rdma_max_cq_size": 0,
00:05:16.650 "rdma_cm_event_timeout_ms": 0,
00:05:16.650 "dhchap_digests": [
00:05:16.650 "sha256",
00:05:16.650 "sha384",
00:05:16.650 "sha512"
00:05:16.650 ],
00:05:16.650 "dhchap_dhgroups": [
00:05:16.650 "null",
00:05:16.650 "ffdhe2048",
00:05:16.650 "ffdhe3072",
00:05:16.650 "ffdhe4096",
00:05:16.650 "ffdhe6144",
00:05:16.650 "ffdhe8192"
00:05:16.650 ]
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "bdev_nvme_set_hotplug",
00:05:16.650 "params": {
00:05:16.650 "period_us": 100000,
00:05:16.650 "enable": false
00:05:16.650 }
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "method": "bdev_wait_for_examine"
00:05:16.650 }
00:05:16.650 ]
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "scsi",
00:05:16.650 "config": null
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "scheduler",
00:05:16.650 "config": [
00:05:16.650 {
00:05:16.650 "method": "framework_set_scheduler",
00:05:16.650 "params": {
00:05:16.650 "name": "static"
00:05:16.650 }
00:05:16.650 }
00:05:16.650 ]
00:05:16.650 },
00:05:16.650 {
00:05:16.650 "subsystem": "vhost_scsi",
00:05:16.650 "config": []
00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "subsystem": "vhost_blk", 00:05:16.650 "config": [] 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "subsystem": "ublk", 00:05:16.650 "config": [] 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "subsystem": "nbd", 00:05:16.650 "config": [] 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "subsystem": "nvmf", 00:05:16.650 "config": [ 00:05:16.650 { 00:05:16.650 "method": "nvmf_set_config", 00:05:16.650 "params": { 00:05:16.650 "discovery_filter": "match_any", 00:05:16.650 "admin_cmd_passthru": { 00:05:16.650 "identify_ctrlr": false 00:05:16.650 } 00:05:16.650 } 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "method": "nvmf_set_max_subsystems", 00:05:16.650 "params": { 00:05:16.650 "max_subsystems": 1024 00:05:16.650 } 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "method": "nvmf_set_crdt", 00:05:16.650 "params": { 00:05:16.650 "crdt1": 0, 00:05:16.650 "crdt2": 0, 00:05:16.650 "crdt3": 0 00:05:16.650 } 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "method": "nvmf_create_transport", 00:05:16.650 "params": { 00:05:16.650 "trtype": "TCP", 00:05:16.650 "max_queue_depth": 128, 00:05:16.650 "max_io_qpairs_per_ctrlr": 127, 00:05:16.650 "in_capsule_data_size": 4096, 00:05:16.650 "max_io_size": 131072, 00:05:16.650 "io_unit_size": 131072, 00:05:16.650 "max_aq_depth": 128, 00:05:16.650 "num_shared_buffers": 511, 00:05:16.650 "buf_cache_size": 4294967295, 00:05:16.650 "dif_insert_or_strip": false, 00:05:16.650 "zcopy": false, 00:05:16.650 "c2h_success": true, 00:05:16.650 "sock_priority": 0, 00:05:16.650 "abort_timeout_sec": 1, 00:05:16.650 "ack_timeout": 0, 00:05:16.650 "data_wr_pool_size": 0 00:05:16.650 } 00:05:16.650 } 00:05:16.650 ] 00:05:16.650 }, 00:05:16.650 { 00:05:16.650 "subsystem": "iscsi", 00:05:16.650 "config": [ 00:05:16.650 { 00:05:16.650 "method": "iscsi_set_options", 00:05:16.650 "params": { 00:05:16.650 "node_base": "iqn.2016-06.io.spdk", 00:05:16.650 "max_sessions": 128, 00:05:16.650 "max_connections_per_session": 2, 00:05:16.650 
"max_queue_depth": 64, 00:05:16.650 "default_time2wait": 2, 00:05:16.650 "default_time2retain": 20, 00:05:16.651 "first_burst_length": 8192, 00:05:16.651 "immediate_data": true, 00:05:16.651 "allow_duplicated_isid": false, 00:05:16.651 "error_recovery_level": 0, 00:05:16.651 "nop_timeout": 60, 00:05:16.651 "nop_in_interval": 30, 00:05:16.651 "disable_chap": false, 00:05:16.651 "require_chap": false, 00:05:16.651 "mutual_chap": false, 00:05:16.651 "chap_group": 0, 00:05:16.651 "max_large_datain_per_connection": 64, 00:05:16.651 "max_r2t_per_connection": 4, 00:05:16.651 "pdu_pool_size": 36864, 00:05:16.651 "immediate_data_pool_size": 16384, 00:05:16.651 "data_out_pool_size": 2048 00:05:16.651 } 00:05:16.651 } 00:05:16.651 ] 00:05:16.651 } 00:05:16.651 ] 00:05:16.651 } 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 727723 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 727723 ']' 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 727723 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:16.651 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727723 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727723' 00:05:16.909 killing process with pid 727723 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 
727723 00:05:16.909 11:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 727723 00:05:19.443 11:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=728422 00:05:19.443 11:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.443 11:11:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 728422 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 728422 ']' 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 728422 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 728422 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 728422' 00:05:24.751 killing process with pid 728422 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 728422 00:05:24.751 11:11:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 728422 00:05:26.654 11:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.654 11:11:12 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.654 00:05:26.654 real 0m11.403s 00:05:26.654 user 0m10.959s 00:05:26.654 sys 0m0.817s 00:05:26.654 11:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.654 11:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.654 ************************************ 00:05:26.654 END TEST skip_rpc_with_json 00:05:26.654 ************************************ 00:05:26.654 11:11:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.654 11:11:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.654 11:11:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.654 11:11:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.654 11:11:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.913 ************************************ 00:05:26.913 START TEST skip_rpc_with_delay 00:05:26.913 ************************************ 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.913 [2024-07-12 11:11:13.101873] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:26.913 [2024-07-12 11:11:13.101957] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.913 00:05:26.913 real 0m0.135s 00:05:26.913 user 0m0.068s 00:05:26.913 sys 0m0.066s 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.913 11:11:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.913 ************************************ 00:05:26.913 END TEST skip_rpc_with_delay 00:05:26.913 ************************************ 00:05:26.913 11:11:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.913 11:11:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.913 11:11:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.913 11:11:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.913 11:11:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.913 11:11:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.913 11:11:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.913 ************************************ 00:05:26.913 START TEST exit_on_failed_rpc_init 00:05:26.913 ************************************ 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=729846 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 729846 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 729846 ']' 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.913 11:11:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.171 [2024-07-12 11:11:13.303849] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:27.171 [2024-07-12 11:11:13.303941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729846 ] 00:05:27.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.171 [2024-07-12 11:11:13.406416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.430 [2024-07-12 11:11:13.614689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.365 11:11:14 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.365 11:11:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.365 [2024-07-12 11:11:14.599900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:28.365 [2024-07-12 11:11:14.599991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730082 ] 00:05:28.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.365 [2024-07-12 11:11:14.702041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.624 [2024-07-12 11:11:14.923855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.624 [2024-07-12 11:11:14.923933] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:28.624 [2024-07-12 11:11:14.923949] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.624 [2024-07-12 11:11:14.923960] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 729846 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 729846 ']' 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 729846 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729846 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729846' 
00:05:29.191 killing process with pid 729846 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 729846 00:05:29.191 11:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 729846 00:05:31.726 00:05:31.726 real 0m4.617s 00:05:31.726 user 0m5.244s 00:05:31.726 sys 0m0.571s 00:05:31.726 11:11:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.726 11:11:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.726 ************************************ 00:05:31.726 END TEST exit_on_failed_rpc_init 00:05:31.726 ************************************ 00:05:31.726 11:11:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.726 11:11:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.726 00:05:31.726 real 0m23.982s 00:05:31.726 user 0m23.515s 00:05:31.726 sys 0m2.073s 00:05:31.726 11:11:17 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.726 11:11:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.726 ************************************ 00:05:31.726 END TEST skip_rpc 00:05:31.726 ************************************ 00:05:31.726 11:11:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.726 11:11:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.726 11:11:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.726 11:11:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.726 11:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.726 ************************************ 00:05:31.726 START TEST rpc_client 00:05:31.726 ************************************ 00:05:31.726 11:11:17 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.726 * Looking for test storage... 00:05:31.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.726 11:11:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.726 OK 00:05:31.985 11:11:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.985 00:05:31.985 real 0m0.147s 00:05:31.985 user 0m0.077s 00:05:31.985 sys 0m0.079s 00:05:31.985 11:11:18 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.985 11:11:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 ************************************ 00:05:31.985 END TEST rpc_client 00:05:31.985 ************************************ 00:05:31.985 11:11:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.985 11:11:18 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.985 11:11:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.985 11:11:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.985 11:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 ************************************ 00:05:31.985 START TEST json_config 00:05:31.985 ************************************ 00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.985 
11:11:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.985 11:11:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.985 11:11:18 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.985 11:11:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.985 11:11:18 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.985 11:11:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.985 11:11:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.985 11:11:18 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.985 11:11:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.985 
11:11:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.985 11:11:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:31.985 INFO: JSON configuration test init 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.985 11:11:18 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.985 11:11:18 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.985 11:11:18 json_config -- json_config/common.sh@10 -- # shift 00:05:31.985 11:11:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.985 11:11:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.985 11:11:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.985 11:11:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.985 11:11:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:31.985 11:11:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=730694
00:05:31.985 11:11:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:31.985 Waiting for target to run...
00:05:31.985 11:11:18 json_config -- json_config/common.sh@25 -- # waitforlisten 730694 /var/tmp/spdk_tgt.sock
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@829 -- # '[' -z 730694 ']'
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:31.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:31.985 11:11:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:31.985 11:11:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.985 [2024-07-12 11:11:18.335247] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:31.985 [2024-07-12 11:11:18.335352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid730694 ]
00:05:32.244 EAL: No free 2048 kB hugepages reported on node 1
00:05:32.503 [2024-07-12 11:11:18.652000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.503 [2024-07-12 11:11:18.851764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:32.762 11:11:19 json_config -- json_config/common.sh@26 -- # echo ''
00:05:32.762
00:05:32.762 11:11:19 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:05:32.762 11:11:19 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:32.762 11:11:19 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:05:32.762 11:11:19 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:32.762 11:11:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:33.021 11:11:19 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:33.021 11:11:19 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:05:33.021 11:11:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:37.232 11:11:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:37.232 11:11:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:37.232 11:11:22 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:37.232 11:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:05:37.232 11:11:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:37.232 11:11:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@55 -- # return 0
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:05:37.232 11:11:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:37.232 11:11:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:37.232 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:37.232 MallocForNvmf0
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:37.232 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:37.232 MallocForNvmf1
00:05:37.232 11:11:23 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:37.232 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:37.504 [2024-07-12 11:11:23.634661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:37.504 11:11:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:37.504 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:37.504 11:11:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:37.504 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:37.832 11:11:23 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:37.832 11:11:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:37.832 11:11:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:37.832 11:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:38.129 [2024-07-12 11:11:24.312908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:38.129 11:11:24 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:05:38.129 11:11:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:38.129 11:11:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.129 11:11:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:05:38.129 11:11:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:38.129 11:11:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.129 11:11:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:05:38.129 11:11:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:38.129 11:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:38.387 MallocBdevForConfigChangeCheck
00:05:38.387 11:11:24 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:05:38.387 11:11:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:38.387 11:11:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:38.387 11:11:24 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:05:38.387 11:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:38.644 11:11:24 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:05:38.644 INFO: shutting down applications...
00:05:38.644 11:11:24 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:05:38.644 11:11:24 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:05:38.644 11:11:24 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:05:38.644 11:11:24 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:40.544 Calling clear_iscsi_subsystem
00:05:40.544 Calling clear_nvmf_subsystem
00:05:40.544 Calling clear_nbd_subsystem
00:05:40.544 Calling clear_ublk_subsystem
00:05:40.544 Calling clear_vhost_blk_subsystem
00:05:40.544 Calling clear_vhost_scsi_subsystem
00:05:40.544 Calling clear_bdev_subsystem
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@343 -- # count=100
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@345 -- # break
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:05:40.544 11:11:26 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:05:40.544 11:11:26 json_config -- json_config/common.sh@31 -- # local app=target
00:05:40.544 11:11:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:40.544 11:11:26 json_config -- json_config/common.sh@35 -- # [[ -n 730694 ]]
00:05:40.544 11:11:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 730694
00:05:40.544 11:11:26 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:40.544 11:11:26 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:40.544 11:11:26 json_config -- json_config/common.sh@41 -- # kill -0 730694
00:05:40.544 11:11:26 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:41.111 11:11:27 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:41.111 11:11:27 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:41.111 11:11:27 json_config -- json_config/common.sh@41 -- # kill -0 730694
00:05:41.111 11:11:27 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:41.677 11:11:27 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:41.677 11:11:27 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:41.677 11:11:27 json_config -- json_config/common.sh@41 -- # kill -0 730694
00:05:41.677 11:11:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:41.677 11:11:27 json_config -- json_config/common.sh@43 -- # break
00:05:41.677 11:11:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:41.677 11:11:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:41.677 SPDK target shutdown done
00:05:41.677 11:11:27 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:05:41.677 INFO: relaunching applications...
00:05:41.677 11:11:27 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:41.677 11:11:27 json_config -- json_config/common.sh@9 -- # local app=target
00:05:41.677 11:11:27 json_config -- json_config/common.sh@10 -- # shift
00:05:41.677 11:11:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:41.677 11:11:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:41.677 11:11:27 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:41.677 11:11:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:41.677 11:11:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:41.677 11:11:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=732430
00:05:41.677 11:11:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:41.677 Waiting for target to run...
00:05:41.677 11:11:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:41.677 11:11:27 json_config -- json_config/common.sh@25 -- # waitforlisten 732430 /var/tmp/spdk_tgt.sock
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 732430 ']'
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:41.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:41.677 11:11:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:41.677 [2024-07-12 11:11:27.873295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:41.677 [2024-07-12 11:11:27.873406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732430 ]
00:05:41.677 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.243 [2024-07-12 11:11:28.361180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.243 [2024-07-12 11:11:28.576019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.445 [2024-07-12 11:11:32.362245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:46.445 [2024-07-12 11:11:32.394581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:46.445 11:11:32 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:46.445 11:11:32 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:46.445 11:11:32 json_config -- json_config/common.sh@26 -- # echo ''
00:05:46.445
00:05:46.445 11:11:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:05:46.445 11:11:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:46.445 INFO: Checking if target configuration is the same...
00:05:46.445 11:11:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.445 11:11:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:05:46.445 11:11:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:46.445 + '[' 2 -ne 2 ']'
00:05:46.445 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:46.445 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:46.445 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:46.445 +++ basename /dev/fd/62
00:05:46.445 ++ mktemp /tmp/62.XXX
00:05:46.445 + tmp_file_1=/tmp/62.feo
00:05:46.445 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.445 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:46.445 + tmp_file_2=/tmp/spdk_tgt_config.json.gyi
00:05:46.445 + ret=0
00:05:46.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:46.704 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:46.704 + diff -u /tmp/62.feo /tmp/spdk_tgt_config.json.gyi
00:05:46.704 + echo 'INFO: JSON config files are the same'
00:05:46.704 INFO: JSON config files are the same
00:05:46.704 + rm /tmp/62.feo /tmp/spdk_tgt_config.json.gyi
00:05:46.704 + exit 0
00:05:46.704 11:11:32 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:05:46.704 11:11:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:46.704 INFO: changing configuration and checking if this can be detected...
00:05:46.704 11:11:32 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:46.704 11:11:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:46.962 11:11:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:05:46.962 11:11:33 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.962 11:11:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:46.962 + '[' 2 -ne 2 ']'
00:05:46.962 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:46.962 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:46.962 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:46.962 +++ basename /dev/fd/62
00:05:46.962 ++ mktemp /tmp/62.XXX
00:05:46.962 + tmp_file_1=/tmp/62.nEf
00:05:46.962 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.962 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:46.962 + tmp_file_2=/tmp/spdk_tgt_config.json.3XW
00:05:46.962 + ret=0
00:05:46.962 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:47.220 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:47.220 + diff -u /tmp/62.nEf /tmp/spdk_tgt_config.json.3XW
00:05:47.220 + ret=1
00:05:47.220 + echo '=== Start of file: /tmp/62.nEf ==='
00:05:47.220 + cat /tmp/62.nEf
00:05:47.220 + echo '=== End of file: /tmp/62.nEf ==='
00:05:47.220 + echo ''
00:05:47.220 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3XW ==='
00:05:47.220 + cat /tmp/spdk_tgt_config.json.3XW
00:05:47.220 + echo '=== End of file: /tmp/spdk_tgt_config.json.3XW ==='
00:05:47.220 + echo ''
00:05:47.220 + rm /tmp/62.nEf /tmp/spdk_tgt_config.json.3XW
00:05:47.220 + exit 1
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
00:05:47.220 INFO: configuration change detected.
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@317 -- # [[ -n 732430 ]]
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@193 -- # uname -s
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:47.220 11:11:33 json_config -- json_config/json_config.sh@323 -- # killprocess 732430
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@948 -- # '[' -z 732430 ']'
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@952 -- # kill -0 732430
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@953 -- # uname
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:47.220 11:11:33 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732430
00:05:47.478 11:11:33 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:47.479 11:11:33 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:47.479 11:11:33 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732430'
00:05:47.479 killing process with pid 732430
00:05:47.479 11:11:33 json_config -- common/autotest_common.sh@967 -- # kill 732430
00:05:47.479 11:11:33 json_config -- common/autotest_common.sh@972 -- # wait 732430
00:05:50.011 11:11:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:50.011 11:11:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:05:50.011 11:11:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:50.011 11:11:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:50.011 11:11:35 json_config -- json_config/json_config.sh@328 -- # return 0
00:05:50.011 11:11:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:05:50.011 INFO: Success
00:05:50.011
00:05:50.011 real 0m17.741s
00:05:50.011 user 0m18.488s
00:05:50.011 sys 0m2.125s
00:05:50.011 11:11:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:50.011 11:11:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:50.011 ************************************
00:05:50.011 END TEST json_config
00:05:50.011 ************************************
00:05:50.011 11:11:35 -- common/autotest_common.sh@1142 -- # return 0
00:05:50.011 11:11:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:50.011 11:11:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:50.011 11:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:50.011 11:11:35 -- common/autotest_common.sh@10 -- # set +x
00:05:50.011 ************************************
00:05:50.012 START TEST json_config_extra_key
00:05:50.012 ************************************
00:05:50.012 11:11:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:50.012 11:11:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:50.012 11:11:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:50.012 11:11:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:50.012 11:11:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:50.012 11:11:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:50.012 11:11:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:50.012 11:11:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:50.012 11:11:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:50.012 11:11:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:50.012 INFO: launching applications...
00:05:50.012 11:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=733999
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:50.012 Waiting for target to run...
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 733999 /var/tmp/spdk_tgt.sock
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 733999 ']'
00:05:50.012 11:11:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:50.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:50.012 11:11:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:50.012 [2024-07-12 11:11:36.132364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:50.012 [2024-07-12 11:11:36.132468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733999 ]
00:05:50.012 EAL: No free 2048 kB hugepages reported on node 1
00:05:50.272 [2024-07-12 11:11:36.459884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:50.531 [2024-07-12 11:11:36.658698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.099 11:11:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:51.099 11:11:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:51.099
00:05:51.099 11:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:51.099 INFO: shutting down applications...
00:05:51.099 11:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 733999 ]]
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 733999
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999
00:05:51.099 11:11:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:51.666 11:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:51.666 11:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:51.666 11:11:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999
00:05:51.666 11:11:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:52.234 11:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:52.234 11:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:52.234 11:11:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999
00:05:52.234 11:11:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:52.801 11:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:52.801 11:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:52.801 11:11:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999
00:05:52.801 11:11:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:53.059 11:11:39 json_config_extra_key -- json_config/common.sh@40
-- # (( i++ )) 00:05:53.059 11:11:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.059 11:11:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999 00:05:53.059 11:11:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.626 11:11:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.626 11:11:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.626 11:11:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999 00:05:53.626 11:11:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 733999 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.194 11:11:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.194 SPDK target shutdown done 00:05:54.194 11:11:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.194 Success 00:05:54.194 00:05:54.194 real 0m4.462s 00:05:54.194 user 0m4.104s 00:05:54.194 sys 0m0.523s 00:05:54.194 11:11:40 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.194 11:11:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 ************************************ 00:05:54.194 END TEST json_config_extra_key 00:05:54.194 ************************************ 00:05:54.194 11:11:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.194 11:11:40 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.194 11:11:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.194 11:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.194 11:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 ************************************ 00:05:54.194 START TEST alias_rpc 00:05:54.194 ************************************ 00:05:54.194 11:11:40 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.453 * Looking for test storage... 00:05:54.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:54.453 11:11:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.453 11:11:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=734874 00:05:54.453 11:11:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.453 11:11:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 734874 00:05:54.453 11:11:40 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 734874 ']' 00:05:54.453 11:11:40 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.453 11:11:40 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.454 11:11:40 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.454 11:11:40 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.454 11:11:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.454 [2024-07-12 11:11:40.657650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:54.454 [2024-07-12 11:11:40.657770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734874 ] 00:05:54.454 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.454 [2024-07-12 11:11:40.764074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.712 [2024-07-12 11:11:40.981232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.647 11:11:41 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.647 11:11:41 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.647 11:11:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:55.905 11:11:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 734874 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 734874 ']' 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 734874 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 734874 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 734874' 00:05:55.905 
killing process with pid 734874 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@967 -- # kill 734874 00:05:55.905 11:11:42 alias_rpc -- common/autotest_common.sh@972 -- # wait 734874 00:05:58.434 00:05:58.434 real 0m4.097s 00:05:58.434 user 0m4.116s 00:05:58.434 sys 0m0.501s 00:05:58.434 11:11:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.434 11:11:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.434 ************************************ 00:05:58.434 END TEST alias_rpc 00:05:58.434 ************************************ 00:05:58.434 11:11:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.434 11:11:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:58.434 11:11:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.434 11:11:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.434 11:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.434 11:11:44 -- common/autotest_common.sh@10 -- # set +x 00:05:58.434 ************************************ 00:05:58.434 START TEST spdkcli_tcp 00:05:58.434 ************************************ 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:58.434 * Looking for test storage... 
00:05:58.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=735619 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 735619 00:05:58.434 11:11:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 735619 ']' 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.434 11:11:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.692 [2024-07-12 11:11:44.837363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:58.692 [2024-07-12 11:11:44.837469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735619 ] 00:05:58.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.692 [2024-07-12 11:11:44.942145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.950 [2024-07-12 11:11:45.154969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.950 [2024-07-12 11:11:45.154979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.885 11:11:46 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.885 11:11:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:59.885 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=735857 00:05:59.885 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:59.885 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.144 [ 00:06:00.144 "bdev_malloc_delete", 00:06:00.144 "bdev_malloc_create", 00:06:00.144 "bdev_null_resize", 00:06:00.144 "bdev_null_delete", 00:06:00.144 "bdev_null_create", 00:06:00.144 "bdev_nvme_cuse_unregister", 00:06:00.144 "bdev_nvme_cuse_register", 00:06:00.144 "bdev_opal_new_user", 00:06:00.144 "bdev_opal_set_lock_state", 00:06:00.144 "bdev_opal_delete", 00:06:00.144 "bdev_opal_get_info", 00:06:00.144 "bdev_opal_create", 00:06:00.144 "bdev_nvme_opal_revert", 00:06:00.144 
"bdev_nvme_opal_init", 00:06:00.144 "bdev_nvme_send_cmd", 00:06:00.144 "bdev_nvme_get_path_iostat", 00:06:00.144 "bdev_nvme_get_mdns_discovery_info", 00:06:00.144 "bdev_nvme_stop_mdns_discovery", 00:06:00.144 "bdev_nvme_start_mdns_discovery", 00:06:00.144 "bdev_nvme_set_multipath_policy", 00:06:00.144 "bdev_nvme_set_preferred_path", 00:06:00.144 "bdev_nvme_get_io_paths", 00:06:00.144 "bdev_nvme_remove_error_injection", 00:06:00.144 "bdev_nvme_add_error_injection", 00:06:00.144 "bdev_nvme_get_discovery_info", 00:06:00.144 "bdev_nvme_stop_discovery", 00:06:00.144 "bdev_nvme_start_discovery", 00:06:00.144 "bdev_nvme_get_controller_health_info", 00:06:00.144 "bdev_nvme_disable_controller", 00:06:00.144 "bdev_nvme_enable_controller", 00:06:00.144 "bdev_nvme_reset_controller", 00:06:00.144 "bdev_nvme_get_transport_statistics", 00:06:00.144 "bdev_nvme_apply_firmware", 00:06:00.144 "bdev_nvme_detach_controller", 00:06:00.144 "bdev_nvme_get_controllers", 00:06:00.144 "bdev_nvme_attach_controller", 00:06:00.144 "bdev_nvme_set_hotplug", 00:06:00.144 "bdev_nvme_set_options", 00:06:00.144 "bdev_passthru_delete", 00:06:00.144 "bdev_passthru_create", 00:06:00.144 "bdev_lvol_set_parent_bdev", 00:06:00.144 "bdev_lvol_set_parent", 00:06:00.144 "bdev_lvol_check_shallow_copy", 00:06:00.144 "bdev_lvol_start_shallow_copy", 00:06:00.144 "bdev_lvol_grow_lvstore", 00:06:00.144 "bdev_lvol_get_lvols", 00:06:00.144 "bdev_lvol_get_lvstores", 00:06:00.144 "bdev_lvol_delete", 00:06:00.144 "bdev_lvol_set_read_only", 00:06:00.144 "bdev_lvol_resize", 00:06:00.144 "bdev_lvol_decouple_parent", 00:06:00.144 "bdev_lvol_inflate", 00:06:00.144 "bdev_lvol_rename", 00:06:00.144 "bdev_lvol_clone_bdev", 00:06:00.144 "bdev_lvol_clone", 00:06:00.144 "bdev_lvol_snapshot", 00:06:00.144 "bdev_lvol_create", 00:06:00.144 "bdev_lvol_delete_lvstore", 00:06:00.144 "bdev_lvol_rename_lvstore", 00:06:00.144 "bdev_lvol_create_lvstore", 00:06:00.144 "bdev_raid_set_options", 00:06:00.144 "bdev_raid_remove_base_bdev", 
00:06:00.144 "bdev_raid_add_base_bdev", 00:06:00.144 "bdev_raid_delete", 00:06:00.144 "bdev_raid_create", 00:06:00.144 "bdev_raid_get_bdevs", 00:06:00.144 "bdev_error_inject_error", 00:06:00.144 "bdev_error_delete", 00:06:00.144 "bdev_error_create", 00:06:00.144 "bdev_split_delete", 00:06:00.144 "bdev_split_create", 00:06:00.144 "bdev_delay_delete", 00:06:00.144 "bdev_delay_create", 00:06:00.144 "bdev_delay_update_latency", 00:06:00.144 "bdev_zone_block_delete", 00:06:00.144 "bdev_zone_block_create", 00:06:00.144 "blobfs_create", 00:06:00.144 "blobfs_detect", 00:06:00.144 "blobfs_set_cache_size", 00:06:00.144 "bdev_aio_delete", 00:06:00.144 "bdev_aio_rescan", 00:06:00.144 "bdev_aio_create", 00:06:00.144 "bdev_ftl_set_property", 00:06:00.144 "bdev_ftl_get_properties", 00:06:00.144 "bdev_ftl_get_stats", 00:06:00.144 "bdev_ftl_unmap", 00:06:00.144 "bdev_ftl_unload", 00:06:00.144 "bdev_ftl_delete", 00:06:00.144 "bdev_ftl_load", 00:06:00.144 "bdev_ftl_create", 00:06:00.144 "bdev_virtio_attach_controller", 00:06:00.144 "bdev_virtio_scsi_get_devices", 00:06:00.144 "bdev_virtio_detach_controller", 00:06:00.144 "bdev_virtio_blk_set_hotplug", 00:06:00.144 "bdev_iscsi_delete", 00:06:00.144 "bdev_iscsi_create", 00:06:00.144 "bdev_iscsi_set_options", 00:06:00.144 "accel_error_inject_error", 00:06:00.144 "ioat_scan_accel_module", 00:06:00.144 "dsa_scan_accel_module", 00:06:00.144 "iaa_scan_accel_module", 00:06:00.144 "keyring_file_remove_key", 00:06:00.144 "keyring_file_add_key", 00:06:00.144 "keyring_linux_set_options", 00:06:00.144 "iscsi_get_histogram", 00:06:00.144 "iscsi_enable_histogram", 00:06:00.144 "iscsi_set_options", 00:06:00.144 "iscsi_get_auth_groups", 00:06:00.144 "iscsi_auth_group_remove_secret", 00:06:00.144 "iscsi_auth_group_add_secret", 00:06:00.144 "iscsi_delete_auth_group", 00:06:00.144 "iscsi_create_auth_group", 00:06:00.144 "iscsi_set_discovery_auth", 00:06:00.144 "iscsi_get_options", 00:06:00.144 "iscsi_target_node_request_logout", 00:06:00.144 
"iscsi_target_node_set_redirect", 00:06:00.144 "iscsi_target_node_set_auth", 00:06:00.144 "iscsi_target_node_add_lun", 00:06:00.144 "iscsi_get_stats", 00:06:00.144 "iscsi_get_connections", 00:06:00.144 "iscsi_portal_group_set_auth", 00:06:00.144 "iscsi_start_portal_group", 00:06:00.144 "iscsi_delete_portal_group", 00:06:00.144 "iscsi_create_portal_group", 00:06:00.144 "iscsi_get_portal_groups", 00:06:00.144 "iscsi_delete_target_node", 00:06:00.144 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.144 "iscsi_target_node_add_pg_ig_maps", 00:06:00.144 "iscsi_create_target_node", 00:06:00.144 "iscsi_get_target_nodes", 00:06:00.144 "iscsi_delete_initiator_group", 00:06:00.144 "iscsi_initiator_group_remove_initiators", 00:06:00.144 "iscsi_initiator_group_add_initiators", 00:06:00.144 "iscsi_create_initiator_group", 00:06:00.144 "iscsi_get_initiator_groups", 00:06:00.144 "nvmf_set_crdt", 00:06:00.144 "nvmf_set_config", 00:06:00.144 "nvmf_set_max_subsystems", 00:06:00.144 "nvmf_stop_mdns_prr", 00:06:00.144 "nvmf_publish_mdns_prr", 00:06:00.144 "nvmf_subsystem_get_listeners", 00:06:00.144 "nvmf_subsystem_get_qpairs", 00:06:00.144 "nvmf_subsystem_get_controllers", 00:06:00.144 "nvmf_get_stats", 00:06:00.144 "nvmf_get_transports", 00:06:00.144 "nvmf_create_transport", 00:06:00.144 "nvmf_get_targets", 00:06:00.144 "nvmf_delete_target", 00:06:00.144 "nvmf_create_target", 00:06:00.144 "nvmf_subsystem_allow_any_host", 00:06:00.144 "nvmf_subsystem_remove_host", 00:06:00.144 "nvmf_subsystem_add_host", 00:06:00.144 "nvmf_ns_remove_host", 00:06:00.144 "nvmf_ns_add_host", 00:06:00.144 "nvmf_subsystem_remove_ns", 00:06:00.144 "nvmf_subsystem_add_ns", 00:06:00.144 "nvmf_subsystem_listener_set_ana_state", 00:06:00.144 "nvmf_discovery_get_referrals", 00:06:00.144 "nvmf_discovery_remove_referral", 00:06:00.144 "nvmf_discovery_add_referral", 00:06:00.144 "nvmf_subsystem_remove_listener", 00:06:00.144 "nvmf_subsystem_add_listener", 00:06:00.144 "nvmf_delete_subsystem", 00:06:00.144 
"nvmf_create_subsystem", 00:06:00.144 "nvmf_get_subsystems", 00:06:00.144 "env_dpdk_get_mem_stats", 00:06:00.144 "nbd_get_disks", 00:06:00.144 "nbd_stop_disk", 00:06:00.144 "nbd_start_disk", 00:06:00.144 "ublk_recover_disk", 00:06:00.144 "ublk_get_disks", 00:06:00.144 "ublk_stop_disk", 00:06:00.144 "ublk_start_disk", 00:06:00.144 "ublk_destroy_target", 00:06:00.144 "ublk_create_target", 00:06:00.144 "virtio_blk_create_transport", 00:06:00.144 "virtio_blk_get_transports", 00:06:00.144 "vhost_controller_set_coalescing", 00:06:00.144 "vhost_get_controllers", 00:06:00.144 "vhost_delete_controller", 00:06:00.144 "vhost_create_blk_controller", 00:06:00.144 "vhost_scsi_controller_remove_target", 00:06:00.144 "vhost_scsi_controller_add_target", 00:06:00.144 "vhost_start_scsi_controller", 00:06:00.144 "vhost_create_scsi_controller", 00:06:00.144 "thread_set_cpumask", 00:06:00.144 "framework_get_governor", 00:06:00.144 "framework_get_scheduler", 00:06:00.144 "framework_set_scheduler", 00:06:00.144 "framework_get_reactors", 00:06:00.144 "thread_get_io_channels", 00:06:00.144 "thread_get_pollers", 00:06:00.144 "thread_get_stats", 00:06:00.144 "framework_monitor_context_switch", 00:06:00.144 "spdk_kill_instance", 00:06:00.144 "log_enable_timestamps", 00:06:00.144 "log_get_flags", 00:06:00.144 "log_clear_flag", 00:06:00.144 "log_set_flag", 00:06:00.144 "log_get_level", 00:06:00.144 "log_set_level", 00:06:00.144 "log_get_print_level", 00:06:00.144 "log_set_print_level", 00:06:00.144 "framework_enable_cpumask_locks", 00:06:00.144 "framework_disable_cpumask_locks", 00:06:00.144 "framework_wait_init", 00:06:00.144 "framework_start_init", 00:06:00.144 "scsi_get_devices", 00:06:00.144 "bdev_get_histogram", 00:06:00.144 "bdev_enable_histogram", 00:06:00.144 "bdev_set_qos_limit", 00:06:00.144 "bdev_set_qd_sampling_period", 00:06:00.144 "bdev_get_bdevs", 00:06:00.144 "bdev_reset_iostat", 00:06:00.144 "bdev_get_iostat", 00:06:00.145 "bdev_examine", 00:06:00.145 "bdev_wait_for_examine", 
00:06:00.145 "bdev_set_options", 00:06:00.145 "notify_get_notifications", 00:06:00.145 "notify_get_types", 00:06:00.145 "accel_get_stats", 00:06:00.145 "accel_set_options", 00:06:00.145 "accel_set_driver", 00:06:00.145 "accel_crypto_key_destroy", 00:06:00.145 "accel_crypto_keys_get", 00:06:00.145 "accel_crypto_key_create", 00:06:00.145 "accel_assign_opc", 00:06:00.145 "accel_get_module_info", 00:06:00.145 "accel_get_opc_assignments", 00:06:00.145 "vmd_rescan", 00:06:00.145 "vmd_remove_device", 00:06:00.145 "vmd_enable", 00:06:00.145 "sock_get_default_impl", 00:06:00.145 "sock_set_default_impl", 00:06:00.145 "sock_impl_set_options", 00:06:00.145 "sock_impl_get_options", 00:06:00.145 "iobuf_get_stats", 00:06:00.145 "iobuf_set_options", 00:06:00.145 "framework_get_pci_devices", 00:06:00.145 "framework_get_config", 00:06:00.145 "framework_get_subsystems", 00:06:00.145 "trace_get_info", 00:06:00.145 "trace_get_tpoint_group_mask", 00:06:00.145 "trace_disable_tpoint_group", 00:06:00.145 "trace_enable_tpoint_group", 00:06:00.145 "trace_clear_tpoint_mask", 00:06:00.145 "trace_set_tpoint_mask", 00:06:00.145 "keyring_get_keys", 00:06:00.145 "spdk_get_version", 00:06:00.145 "rpc_get_methods" 00:06:00.145 ] 00:06:00.145 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.145 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.145 11:11:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 735619 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 735619 ']' 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 735619 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.145 
11:11:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735619 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735619' 00:06:00.145 killing process with pid 735619 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 735619 00:06:00.145 11:11:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 735619 00:06:02.677 00:06:02.677 real 0m4.214s 00:06:02.677 user 0m7.493s 00:06:02.677 sys 0m0.565s 00:06:02.677 11:11:48 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.677 11:11:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.677 ************************************ 00:06:02.677 END TEST spdkcli_tcp 00:06:02.677 ************************************ 00:06:02.677 11:11:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.677 11:11:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.677 11:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.677 11:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.677 11:11:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.677 ************************************ 00:06:02.677 START TEST dpdk_mem_utility 00:06:02.677 ************************************ 00:06:02.677 11:11:48 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.677 * Looking for test storage... 
00:06:02.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:02.677 11:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.677 11:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=736382 00:06:02.677 11:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 736382 00:06:02.677 11:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 736382 ']' 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.677 11:11:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.936 [2024-07-12 11:11:49.106738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:02.936 [2024-07-12 11:11:49.106835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736382 ] 00:06:02.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.936 [2024-07-12 11:11:49.208496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.195 [2024-07-12 11:11:49.415284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.131 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.131 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:04.131 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.131 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.131 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.131 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 { 00:06:04.131 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.131 } 00:06:04.131 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.131 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.131 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:04.131 1 heaps totaling size 820.000000 MiB 00:06:04.131 size: 820.000000 MiB heap id: 0 00:06:04.131 end heaps---------- 00:06:04.131 8 mempools totaling size 598.116089 MiB 00:06:04.131 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.131 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.131 size: 84.521057 MiB name: bdev_io_736382 00:06:04.131 size: 51.011292 MiB name: evtpool_736382 
00:06:04.131 size: 50.003479 MiB name: msgpool_736382
00:06:04.131 size: 21.763794 MiB name: PDU_Pool
00:06:04.131 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:04.131 size: 0.026123 MiB name: Session_Pool
00:06:04.131 end mempools-------
00:06:04.131 6 memzones totaling size 4.142822 MiB
00:06:04.131 size: 1.000366 MiB name: RG_ring_0_736382
00:06:04.131 size: 1.000366 MiB name: RG_ring_1_736382
00:06:04.131 size: 1.000366 MiB name: RG_ring_4_736382
00:06:04.131 size: 1.000366 MiB name: RG_ring_5_736382
00:06:04.131 size: 0.125366 MiB name: RG_ring_2_736382
00:06:04.131 size: 0.015991 MiB name: RG_ring_3_736382
00:06:04.131 end memzones-------
00:06:04.131 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:04.131 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19
00:06:04.131 list of free elements. size: 18.514832 MiB
00:06:04.131 element at address: 0x200000400000 with size: 1.999451 MiB
00:06:04.131 element at address: 0x200000800000 with size: 1.996887 MiB
00:06:04.131 element at address: 0x200007000000 with size: 1.995972 MiB
00:06:04.131 element at address: 0x20000b200000 with size: 1.995972 MiB
00:06:04.131 element at address: 0x200019100040 with size: 0.999939 MiB
00:06:04.131 element at address: 0x200019500040 with size: 0.999939 MiB
00:06:04.131 element at address: 0x200019600000 with size: 0.999329 MiB
00:06:04.131 element at address: 0x200003e00000 with size: 0.996094 MiB
00:06:04.131 element at address: 0x200032200000 with size: 0.994324 MiB
00:06:04.131 element at address: 0x200018e00000 with size: 0.959900 MiB
00:06:04.131 element at address: 0x200019900040 with size: 0.937256 MiB
00:06:04.131 element at address: 0x200000200000 with size: 0.840942 MiB
00:06:04.131 element at address: 0x20001b000000 with size: 0.583191 MiB
00:06:04.131 element at address: 0x200019200000 with size: 0.491150 MiB
00:06:04.131 element at address: 0x200019a00000 with size: 0.485657 MiB
00:06:04.131 element at address: 0x200013800000 with size: 0.470581 MiB
00:06:04.131 element at address: 0x200028400000 with size: 0.411072 MiB
00:06:04.131 element at address: 0x200003a00000 with size: 0.356140 MiB
00:06:04.131 element at address: 0x20000b1ff040 with size: 0.001038 MiB
00:06:04.131 list of standard malloc elements. size: 199.220764 MiB
00:06:04.131 element at address: 0x20000b3fef80 with size: 132.000183 MiB
00:06:04.131 element at address: 0x2000071fef80 with size: 64.000183 MiB
00:06:04.131 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:06:04.131 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:04.131 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:06:04.131 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:04.131 element at address: 0x2000199eff40 with size: 0.062683 MiB
00:06:04.131 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:04.131 element at address: 0x2000137ff040 with size: 0.000427 MiB
00:06:04.131 element at address: 0x2000137ffa00 with size: 0.000366 MiB
00:06:04.131 element at address: 0x2000002d7480 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000002d7580 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000002d7680 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000002d7900 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000002d7a00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:06:04.131 element at address: 0x200003aff980 with size: 0.000244 MiB
00:06:04.131 element at address: 0x200003affa80 with size: 0.000244 MiB
00:06:04.131 element at address: 0x200003eff000 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff480 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff580 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff680 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff780 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff880 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ff980 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ffc00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ffd00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1ffe00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x20000b1fff00 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff200 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff300 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff400 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff500 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff600 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff700 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff800 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ff900 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ffb80 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137ffc80 with size: 0.000244 MiB
00:06:04.131 element at address: 0x2000137fff00 with size: 0.000244 MiB
00:06:04.131 list of memzone associated elements. size: 602.264404 MiB
00:06:04.131 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:06:04.131 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:04.131 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:06:04.131 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:04.131 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:06:04.131 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_736382_0
00:06:04.131 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:06:04.131 associated memzone info: size: 48.002930 MiB name: MP_evtpool_736382_0
00:06:04.131 element at address: 0x200003fff340 with size: 48.003113 MiB
00:06:04.131 associated memzone info: size: 48.002930 MiB name: MP_msgpool_736382_0
00:06:04.132 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:06:04.132 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:04.132 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:06:04.132 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:04.132 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:06:04.132 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_736382
00:06:04.132 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:06:04.132 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_736382
00:06:04.132 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:04.132 associated memzone info: size: 1.007996 MiB name: MP_evtpool_736382
00:06:04.132 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:06:04.132 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:04.132 element at address: 0x200019abc780 with size: 1.008179 MiB
00:06:04.132 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:04.132 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:04.132 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:04.132 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:06:04.132 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:04.132 element at address: 0x200003eff100 with size: 1.000549 MiB
00:06:04.132 associated memzone info: size: 1.000366 MiB name: RG_ring_0_736382
00:06:04.132 element at address: 0x200003affb80 with size: 1.000549 MiB
00:06:04.132 associated memzone info: size: 1.000366 MiB name: RG_ring_1_736382
00:06:04.132 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:06:04.132 associated memzone info: size: 1.000366 MiB name: RG_ring_4_736382
00:06:04.132 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:06:04.132 associated memzone info: size: 1.000366 MiB name: RG_ring_5_736382
00:06:04.132 element at address: 0x200003a5b2c0 with size: 0.500549 MiB
00:06:04.132 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_736382
00:06:04.132 element at address: 0x20001927dbc0 with size: 0.500549 MiB
00:06:04.132 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:04.132 element at address: 0x200013878780 with size: 0.500549 MiB
00:06:04.132 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:04.132 element at address: 0x200019a7c540 with size: 0.250549 MiB
00:06:04.132 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:04.132 element at address: 0x200003adf740 with size: 0.125549 MiB
00:06:04.132 associated memzone info: size: 0.125366 MiB name: RG_ring_2_736382
00:06:04.132 element at address: 0x200018ef5bc0 with size: 0.031799 MiB
00:06:04.132 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:04.132 element at address: 0x2000284693c0 with size: 0.023804 MiB
00:06:04.132 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:04.132 element at address: 0x200003adb500 with size: 0.016174 MiB
00:06:04.132 associated memzone info: size: 0.015991 MiB name: RG_ring_3_736382
00:06:04.132 element at address: 0x20002846f540 with size: 0.002502 MiB
00:06:04.132 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:04.132 element at address: 0x2000002d7780 with size: 0.000366 MiB
00:06:04.132 associated memzone info: size: 0.000183 MiB name: MP_msgpool_736382
00:06:04.132 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:06:04.132 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_736382
00:06:04.132 element at address: 0x20000b1ffa80 with size: 0.000366 MiB
00:06:04.132 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:04.132 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:04.132 11:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 736382
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 736382 ']'
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 736382
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 736382
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 736382' killing process with pid 736382
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 736382
00:06:04.132 11:11:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 736382
00:06:06.665
00:06:06.665 real 0m3.998s
00:06:06.665 user 0m3.954s
00:06:06.665 sys 0m0.508s
00:06:06.665 11:11:52 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:06.665 11:11:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:06.665 ************************************
00:06:06.665 END TEST dpdk_mem_utility
00:06:06.665 ************************************
00:06:06.665 11:11:52 -- common/autotest_common.sh@1142 -- # return 0
00:06:06.665 11:11:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:06.665 11:11:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:06.665 11:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.665 11:11:52 -- common/autotest_common.sh@10 -- # set +x
00:06:06.665 ************************************
00:06:06.665 START TEST event
00:06:06.665 ************************************
00:06:06.665 11:11:52 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:06.923 * Looking for test storage...
00:06:06.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:06.923 11:11:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:06.923 11:11:53 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:06.923 11:11:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:06.923 11:11:53 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:06.923 11:11:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.923 11:11:53 event -- common/autotest_common.sh@10 -- # set +x
00:06:06.923 ************************************
00:06:06.923 START TEST event_perf
00:06:06.923 ************************************
00:06:06.923 11:11:53 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:06.923 Running I/O for 1 seconds...[2024-07-12 11:11:53.159562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:06.923 [2024-07-12 11:11:53.159644] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737128 ]
00:06:06.923 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.923 [2024-07-12 11:11:53.259910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:07.181 [2024-07-12 11:11:53.479314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.181 [2024-07-12 11:11:53.479395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.181 [2024-07-12 11:11:53.479448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.181 [2024-07-12 11:11:53.479465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:08.555 Running I/O for 1 seconds...
00:06:08.555 lcore 0: 196056
00:06:08.555 lcore 1: 196055
00:06:08.555 lcore 2: 196055
00:06:08.555 lcore 3: 196056
00:06:08.555 done.
00:06:08.555
00:06:08.555 real 0m1.773s
00:06:08.555 user 0m4.628s
00:06:08.555 sys 0m0.139s
00:06:08.555 11:11:54 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:08.555 11:11:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:08.555 ************************************
00:06:08.555 END TEST event_perf
00:06:08.555 ************************************
00:06:08.813 11:11:54 event -- common/autotest_common.sh@1142 -- # return 0
00:06:08.813 11:11:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:08.813 11:11:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:08.813 11:11:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:08.813 11:11:54 event -- common/autotest_common.sh@10 -- # set +x
00:06:08.813 ************************************
00:06:08.813 START TEST event_reactor
00:06:08.813 ************************************
00:06:08.813 11:11:54 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:08.813 [2024-07-12 11:11:55.003721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:08.813 [2024-07-12 11:11:55.003813] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737395 ]
00:06:08.813 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.813 [2024-07-12 11:11:55.107240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.071 [2024-07-12 11:11:55.321388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.444 test_start
00:06:10.444 oneshot
00:06:10.444 tick 100
00:06:10.444 tick 100
00:06:10.444 tick 250
00:06:10.444 tick 100
00:06:10.444 tick 100
00:06:10.444 tick 100
00:06:10.444 tick 250
00:06:10.444 tick 500
00:06:10.444 tick 100
00:06:10.444 tick 100
00:06:10.444 tick 250
00:06:10.444 tick 100
00:06:10.444 tick 100
00:06:10.444 test_end
00:06:10.444
00:06:10.444 real 0m1.774s
00:06:10.444 user 0m1.633s
00:06:10.444 sys 0m0.134s
00:06:10.444 11:11:56 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:10.444 11:11:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:10.444 ************************************
00:06:10.444 END TEST event_reactor
00:06:10.444 ************************************
00:06:10.444 11:11:56 event -- common/autotest_common.sh@1142 -- # return 0
00:06:10.444 11:11:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:10.445 11:11:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:10.445 11:11:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:10.445 11:11:56 event -- common/autotest_common.sh@10 -- # set +x
00:06:10.703 ************************************
00:06:10.703 START TEST event_reactor_perf
00:06:10.703 ************************************
00:06:10.703 11:11:56 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:10.703 [2024-07-12 11:11:56.847102] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:10.703 [2024-07-12 11:11:56.847184] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737863 ]
00:06:10.703 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.703 [2024-07-12 11:11:56.946448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.962 [2024-07-12 11:11:57.158744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.338 test_start
00:06:12.338 test_end
00:06:12.338 Performance: 380043 events per second
00:06:12.338
00:06:12.338 real 0m1.760s
00:06:12.338 user 0m1.621s
00:06:12.338 sys 0m0.131s
00:06:12.338 11:11:58 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.338 11:11:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:12.338 ************************************
00:06:12.338 END TEST event_reactor_perf
00:06:12.338 ************************************
00:06:12.338 11:11:58 event -- common/autotest_common.sh@1142 -- # return 0
00:06:12.338 11:11:58 event -- event/event.sh@49 -- # uname -s
00:06:12.338 11:11:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:12.338 11:11:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:12.338 11:11:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:12.338 11:11:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.338 11:11:58 event -- common/autotest_common.sh@10 -- # set +x
00:06:12.338 ************************************
00:06:12.338 START TEST event_scheduler
00:06:12.338 ************************************
00:06:12.338 11:11:58 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:12.597 * Looking for test storage...
00:06:12.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:12.597 11:11:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:12.597 11:11:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=738148
00:06:12.597 11:11:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:12.597 11:11:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:12.597 11:11:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 738148
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 738148 ']'
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:12.597 11:11:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:12.597 [2024-07-12 11:11:58.805386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:12.597 [2024-07-12 11:11:58.805493] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738148 ]
00:06:12.597 EAL: No free 2048 kB hugepages reported on node 1
00:06:12.597 [2024-07-12 11:11:58.904806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:12.856 [2024-07-12 11:11:59.124958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.856 [2024-07-12 11:11:59.125013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.856 [2024-07-12 11:11:59.125072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.856 [2024-07-12 11:11:59.125081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:06:13.423 11:11:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:13.423 [2024-07-12 11:11:59.595239] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:13.423 [2024-07-12 11:11:59.595268] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:13.423 [2024-07-12 11:11:59.595287] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:13.423 [2024-07-12 11:11:59.595298] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:13.423 [2024-07-12 11:11:59.595306] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.423 11:11:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.423 11:11:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 [2024-07-12 11:11:59.943138] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:13.682 11:11:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:11:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:13.682 11:11:59 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:13.682 11:11:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.682 11:11:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 ************************************
00:06:13.682 START TEST scheduler_create_thread
00:06:13.682 ************************************
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 2
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 3
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:11:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 4
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 5
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 6
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.682 7
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.682 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.941 8
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.941 9
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.941 10
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.941 11:12:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:15.318 11:12:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:15.318 11:12:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:15.319 11:12:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:15.319 11:12:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:15.319 11:12:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:16.253 11:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:16.253
00:06:16.253 real 0m2.627s
00:06:16.253 user 0m0.024s
00:06:16.253 sys 0m0.004s
00:06:16.253 11:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.253 11:12:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:16.253 ************************************
00:06:16.253 END TEST scheduler_create_thread
00:06:16.253 ************************************
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:06:16.512 11:12:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:16.512 11:12:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 738148
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 738148 ']'
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 738148
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738148
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738148' killing process with pid 738148
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 738148
00:06:16.512 11:12:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 738148
00:06:16.779 [2024-07-12 11:12:03.084660] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:18.159
00:06:18.159 real 0m5.787s
00:06:18.159 user 0m9.732s
00:06:18.159 sys 0m0.447s
00:06:18.159 11:12:04 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.159 11:12:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:18.159 ************************************
00:06:18.159 END TEST event_scheduler
00:06:18.159 ************************************
00:06:18.159 11:12:04 event -- common/autotest_common.sh@1142 -- # return 0
00:06:18.159 11:12:04 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:18.159 11:12:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:18.159 11:12:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:18.159 11:12:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:18.159 11:12:04 event -- common/autotest_common.sh@10 -- # set +x
00:06:18.159 ************************************
00:06:18.159 START TEST app_repeat
00:06:18.159 ************************************
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=739247
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 739247' Process app_repeat pid: 739247
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' spdk_app_start Round 0
00:06:18.159 11:12:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 739247 /var/tmp/spdk-nbd.sock
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 739247 ']'
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:18.159 11:12:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.418 11:12:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.418 [2024-07-12 11:12:04.559554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:18.418 [2024-07-12 11:12:04.559645] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid739247 ] 00:06:18.418 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.418 [2024-07-12 11:12:04.666971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.677 [2024-07-12 11:12:04.891757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.677 [2024-07-12 11:12:04.891767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.244 11:12:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.244 11:12:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:19.244 11:12:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.244 Malloc0 00:06:19.503 11:12:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.503 Malloc1 00:06:19.503 11:12:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 
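The nbd_rpc_data_verify call entered just above boils down to a write-then-compare cycle over the two NBD devices. A minimal sketch of that cycle, with plain temp files as stand-ins for /dev/nbd0, /dev/nbd1, and the nbdrandtest pattern file (assumptions, so it runs without an NBD kernel module):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins (assumptions): temp files replace the real /dev/nbd0 and /dev/nbd1
# block devices, and $tmp_file replaces spdk/test/event/nbdrandtest.
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# Write phase: generate a 1 MiB random pattern, then copy it to each "device",
# mirroring the dd bs=4096 count=256 calls in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each "device" against the pattern,
# as nbd_common.sh does with cmp -b -n 1M.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
echo "nbd data verify OK"
```

The real helper additionally passes oflag=direct/iflag=direct to bypass the page cache when talking to actual block devices, which regular files here do not need.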
00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.503 11:12:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.761 /dev/nbd0 00:06:19.761 11:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.761 11:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@871 -- # 
break 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.761 1+0 records in 00:06:19.761 1+0 records out 00:06:19.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180018 s, 22.8 MB/s 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.761 11:12:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.761 11:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.761 11:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.761 11:12:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.020 /dev/nbd1 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 
)) 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.020 1+0 records in 00:06:20.020 1+0 records out 00:06:20.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186653 s, 21.9 MB/s 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.020 11:12:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.020 11:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.278 { 00:06:20.278 "nbd_device": "/dev/nbd0", 00:06:20.278 "bdev_name": 
"Malloc0" 00:06:20.278 }, 00:06:20.278 { 00:06:20.278 "nbd_device": "/dev/nbd1", 00:06:20.278 "bdev_name": "Malloc1" 00:06:20.278 } 00:06:20.278 ]' 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.278 { 00:06:20.278 "nbd_device": "/dev/nbd0", 00:06:20.278 "bdev_name": "Malloc0" 00:06:20.278 }, 00:06:20.278 { 00:06:20.278 "nbd_device": "/dev/nbd1", 00:06:20.278 "bdev_name": "Malloc1" 00:06:20.278 } 00:06:20.278 ]' 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.278 /dev/nbd1' 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.278 /dev/nbd1' 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.278 11:12:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 
count=256 00:06:20.279 256+0 records in 00:06:20.279 256+0 records out 00:06:20.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324202 s, 323 MB/s 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.279 256+0 records in 00:06:20.279 256+0 records out 00:06:20.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157799 s, 66.4 MB/s 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.279 256+0 records in 00:06:20.279 256+0 records out 00:06:20.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189619 s, 55.3 MB/s 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 
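The waitfornbd helper traced earlier (once for nbd0, once for nbd1) is a bounded poll of /proc/partitions followed by a one-block read check. A rough sketch of the polling part, assuming a plain temp file named by $partitions in place of /proc/partitions so the loop is runnable anywhere:

```shell
set -euo pipefail

# Assumption: a temp file stands in for /proc/partitions.
partitions=$(mktemp)
echo "nbd0" > "$partitions"

waitfornbd() {
    local nbd_name=$1 i
    # Poll up to 20 times for the device name to appear, as in
    # common/autotest_common.sh; the real helper then dd's a single
    # 4096-byte block from the device to confirm it is readable.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
    done
    ((i <= 20))
}

waitfornbd nbd0 && echo "nbd0 ready"
```

Exhausting all 20 attempts leaves the final arithmetic test false, so the caller sees a nonzero status instead of hanging forever on a device that never attaches.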
00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.279 11:12:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.538 11:12:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.538 11:12:06 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.797 11:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.797 11:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.056 11:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.056 11:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.056 11:12:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.056 11:12:07 
event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.056 11:12:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.056 11:12:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.056 11:12:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.314 11:12:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.690 [2024-07-12 11:12:08.958458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.949 [2024-07-12 11:12:09.184752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.949 [2024-07-12 11:12:09.184752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.208 [2024-07-12 11:12:09.420600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.208 [2024-07-12 11:12:09.420657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.583 11:12:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.583 11:12:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.583 spdk_app_start Round 1 00:06:24.583 11:12:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 739247 /var/tmp/spdk-nbd.sock 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 739247 ']' 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
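Round 0 ends just above with spdk_kill_instance SIGTERM and a 3-second settle, after which event.sh starts Round 1. A hypothetical condensation of that control flow, with the per-round RPC work stubbed as comments and echo so the loop runs as-is:

```shell
set -euo pipefail

rounds=3   # event.sh iterates "for i in {0..2}"
for ((round = 0; round < rounds; round++)); do
    echo "spdk_app_start Round $round"
    # Per round, the real test does roughly:
    #   waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
    #   rpc.py bdev_malloc_create 64 4096      (twice: Malloc0, Malloc1)
    #   nbd_rpc_data_verify ... 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    #   rpc.py spdk_kill_instance SIGTERM; sleep 3
    echo "Round $round done"
done
echo "app_repeat finished $rounds rounds"
```

Keeping the kill-and-restart inside the loop is the point of the test: each round re-exercises app startup, NBD attach/detach, and clean shutdown rather than reusing one long-lived instance.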
00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.583 11:12:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:24.583 11:12:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.583 Malloc0 00:06:24.583 11:12:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.842 Malloc1 00:06:24.842 11:12:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.842 11:12:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.102 /dev/nbd0 00:06:25.102 11:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.102 11:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.102 1+0 records in 00:06:25.102 1+0 records out 00:06:25.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206768 s, 19.8 MB/s 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.102 11:12:11 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.102 11:12:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.102 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.102 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.102 11:12:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.361 /dev/nbd1 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.361 1+0 records in 00:06:25.361 1+0 records out 00:06:25.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219439 s, 18.7 MB/s 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.361 11:12:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.361 11:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.621 { 00:06:25.621 "nbd_device": "/dev/nbd0", 00:06:25.621 "bdev_name": "Malloc0" 00:06:25.621 }, 00:06:25.621 { 00:06:25.621 "nbd_device": "/dev/nbd1", 00:06:25.621 "bdev_name": "Malloc1" 00:06:25.621 } 00:06:25.621 ]' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.621 { 00:06:25.621 "nbd_device": "/dev/nbd0", 00:06:25.621 "bdev_name": "Malloc0" 00:06:25.621 }, 00:06:25.621 { 00:06:25.621 "nbd_device": "/dev/nbd1", 00:06:25.621 "bdev_name": "Malloc1" 00:06:25.621 } 00:06:25.621 ]' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.621 /dev/nbd1' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.621 /dev/nbd1' 00:06:25.621 
11:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.621 256+0 records in 00:06:25.621 256+0 records out 00:06:25.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100781 s, 104 MB/s 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.621 256+0 records in 00:06:25.621 256+0 records out 00:06:25.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159796 s, 65.6 MB/s 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.621 256+0 records in 00:06:25.621 256+0 records out 00:06:25.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020156 s, 52.0 MB/s 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.621 11:12:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.880 11:12:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.139 11:12:12 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.139 11:12:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.139 11:12:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.706 11:12:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.083 [2024-07-12 11:12:14.224758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.083 [2024-07-12 11:12:14.437176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.083 [2024-07-12 11:12:14.437181] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.342 [2024-07-12 11:12:14.674543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.342 [2024-07-12 11:12:14.674597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.718 11:12:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.718 11:12:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.718 spdk_app_start Round 2 00:06:29.718 11:12:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 739247 /var/tmp/spdk-nbd.sock 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 739247 ']' 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.718 11:12:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.718 11:12:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.977 Malloc0 00:06:29.977 11:12:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.235 Malloc1 00:06:30.235 11:12:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.235 11:12:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.495 /dev/nbd0 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.495 1+0 records in 00:06:30.495 1+0 records out 00:06:30.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242054 s, 16.9 MB/s 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.495 11:12:16 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.495 /dev/nbd1 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.495 11:12:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.495 11:12:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.754 1+0 records in 00:06:30.754 1+0 records out 00:06:30.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205463 s, 19.9 MB/s 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.754 11:12:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.754 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.754 11:12:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.754 11:12:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.754 11:12:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.754 11:12:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.754 { 00:06:30.754 "nbd_device": "/dev/nbd0", 00:06:30.754 "bdev_name": "Malloc0" 00:06:30.754 }, 00:06:30.754 { 00:06:30.754 "nbd_device": "/dev/nbd1", 00:06:30.754 "bdev_name": "Malloc1" 00:06:30.754 } 00:06:30.754 ]' 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.754 { 00:06:30.754 "nbd_device": "/dev/nbd0", 00:06:30.754 "bdev_name": "Malloc0" 00:06:30.754 }, 00:06:30.754 { 00:06:30.754 "nbd_device": "/dev/nbd1", 00:06:30.754 "bdev_name": "Malloc1" 00:06:30.754 } 00:06:30.754 ]' 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.754 /dev/nbd1' 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.754 /dev/nbd1' 00:06:30.754 
11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.754 11:12:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.755 256+0 records in 00:06:30.755 256+0 records out 00:06:30.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972547 s, 108 MB/s 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.755 11:12:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.013 256+0 records in 00:06:31.013 256+0 records out 00:06:31.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162948 s, 64.4 MB/s 00:06:31.013 11:12:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.013 11:12:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.013 256+0 records in 00:06:31.013 256+0 records out 00:06:31.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190568 s, 55.0 MB/s 00:06:31.013 11:12:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.014 11:12:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.272 11:12:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.272 11:12:17 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:31.273 11:12:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.273 11:12:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.273 11:12:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.273 11:12:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.531 11:12:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.531 11:12:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.789 11:12:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.690 [2024-07-12 11:12:19.573118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.690 [2024-07-12 11:12:19.782805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.690 [2024-07-12 11:12:19.782805] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.690 [2024-07-12 11:12:20.014892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.690 [2024-07-12 11:12:20.014944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.067 11:12:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 739247 /var/tmp/spdk-nbd.sock 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 739247 ']' 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.067 11:12:21 event.app_repeat -- event/event.sh@39 -- # killprocess 739247 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 739247 ']' 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 739247 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 739247 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 739247' 00:06:35.067 killing process with pid 739247 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 739247 00:06:35.067 11:12:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 739247 00:06:36.439 spdk_app_start is called in Round 0. 00:06:36.439 Shutdown signal received, stop current app iteration 00:06:36.439 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.439 spdk_app_start is called in Round 1. 00:06:36.439 Shutdown signal received, stop current app iteration 00:06:36.439 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.439 spdk_app_start is called in Round 2. 
00:06:36.439 Shutdown signal received, stop current app iteration 00:06:36.439 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:36.439 spdk_app_start is called in Round 3. 00:06:36.439 Shutdown signal received, stop current app iteration 00:06:36.439 11:12:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.439 11:12:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.439 00:06:36.439 real 0m18.106s 00:06:36.439 user 0m36.598s 00:06:36.439 sys 0m2.410s 00:06:36.439 11:12:22 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.439 11:12:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.439 ************************************ 00:06:36.439 END TEST app_repeat 00:06:36.439 ************************************ 00:06:36.439 11:12:22 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.439 11:12:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.439 11:12:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.439 11:12:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.439 11:12:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.439 11:12:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.439 ************************************ 00:06:36.439 START TEST cpu_locks 00:06:36.439 ************************************ 00:06:36.439 11:12:22 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.439 * Looking for test storage... 
00:06:36.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.439 11:12:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.439 11:12:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.439 11:12:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.439 11:12:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.439 11:12:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.439 11:12:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.439 11:12:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.439 ************************************ 00:06:36.439 START TEST default_locks 00:06:36.439 ************************************ 00:06:36.439 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:36.439 11:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=743066 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 743066 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 743066 ']' 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:36.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.440 11:12:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.698 [2024-07-12 11:12:22.873332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:36.698 [2024-07-12 11:12:22.873430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743066 ] 00:06:36.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.698 [2024-07-12 11:12:22.974846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.955 [2024-07-12 11:12:23.184556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.888 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.888 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:37.888 11:12:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 743066 00:06:37.888 11:12:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.888 11:12:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 743066 00:06:38.454 lslocks: write error 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 743066 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 743066 ']' 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 743066 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743066 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743066' 00:06:38.454 killing process with pid 743066 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 743066 00:06:38.454 11:12:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 743066 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 743066 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 743066 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 743066 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 743066 ']' 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (743066) - No such process 00:06:40.986 ERROR: process (pid: 743066) is no longer running 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.986 00:06:40.986 real 0m4.314s 00:06:40.986 user 0m4.263s 00:06:40.986 sys 0m0.676s 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.986 11:12:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.986 ************************************ 00:06:40.986 END TEST default_locks 00:06:40.986 ************************************ 00:06:40.987 11:12:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.987 11:12:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.987 11:12:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.987 11:12:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.987 11:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.987 ************************************ 00:06:40.987 START TEST default_locks_via_rpc 00:06:40.987 ************************************ 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=743798 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 743798 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 743798 ']' 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.987 11:12:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.987 [2024-07-12 11:12:27.232495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:40.987 [2024-07-12 11:12:27.232589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743798 ] 00:06:40.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.987 [2024-07-12 11:12:27.331440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.245 [2024-07-12 11:12:27.541200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 743798 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 743798 00:06:42.181 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 743798 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 743798 ']' 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 743798 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 743798 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 743798' 00:06:42.440 killing process with pid 743798 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- 
# kill 743798 00:06:42.440 11:12:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 743798 00:06:44.972 00:06:44.972 real 0m3.976s 00:06:44.972 user 0m3.958s 00:06:44.972 sys 0m0.545s 00:06:44.972 11:12:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.972 11:12:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.972 ************************************ 00:06:44.972 END TEST default_locks_via_rpc 00:06:44.972 ************************************ 00:06:44.972 11:12:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.972 11:12:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.972 11:12:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.972 11:12:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.972 11:12:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.972 ************************************ 00:06:44.972 START TEST non_locking_app_on_locked_coremask 00:06:44.972 ************************************ 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=744533 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 744533 /var/tmp/spdk.sock 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 744533 ']' 00:06:44.972 11:12:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.972 11:12:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.972 [2024-07-12 11:12:31.278450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:44.972 [2024-07-12 11:12:31.278542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744533 ] 00:06:45.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.230 [2024-07-12 11:12:31.381999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.230 [2024-07-12 11:12:31.579426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=744765 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 744765 
/var/tmp/spdk2.sock 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 744765 ']' 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.167 11:12:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.426 [2024-07-12 11:12:32.542048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:46.426 [2024-07-12 11:12:32.542139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744765 ] 00:06:46.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.426 [2024-07-12 11:12:32.676017] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.426 [2024-07-12 11:12:32.676071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.993 [2024-07-12 11:12:33.097876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.902 11:12:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.902 11:12:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.902 11:12:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 744533 00:06:48.902 11:12:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 744533 00:06:48.902 11:12:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.470 lslocks: write error 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 744533 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 744533 ']' 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 744533 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744533 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 744533' 00:06:49.470 killing process with pid 744533 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 744533 00:06:49.470 11:12:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 744533 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 744765 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 744765 ']' 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 744765 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744765 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744765' 00:06:54.744 killing process with pid 744765 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 744765 00:06:54.744 11:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 744765 00:06:56.747 00:06:56.747 real 0m11.581s 00:06:56.747 user 0m11.719s 00:06:56.747 sys 0m1.214s 00:06:56.747 11:12:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.747 11:12:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.747 ************************************ 00:06:56.747 END TEST non_locking_app_on_locked_coremask 00:06:56.747 ************************************ 00:06:56.747 11:12:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.747 11:12:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.747 11:12:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.747 11:12:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.747 11:12:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.747 ************************************ 00:06:56.747 START TEST locking_app_on_unlocked_coremask 00:06:56.747 ************************************ 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=746423 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 746423 /var/tmp/spdk.sock 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 746423 ']' 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.747 11:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.747 [2024-07-12 11:12:42.902874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:56.747 [2024-07-12 11:12:42.902981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746423 ] 00:06:56.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.747 [2024-07-12 11:12:43.004470] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.747 [2024-07-12 11:12:43.004516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.049 [2024-07-12 11:12:43.227477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=746659 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 746659 /var/tmp/spdk2.sock 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 746659 ']' 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.986 11:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.986 [2024-07-12 11:12:44.194099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:57.986 [2024-07-12 11:12:44.194189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746659 ] 00:06:57.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.986 [2024-07-12 11:12:44.335433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.554 [2024-07-12 11:12:44.752325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.459 11:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.459 11:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.459 11:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 746659 00:07:00.459 11:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 746659 00:07:00.459 11:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.029 lslocks: write error 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 746423 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 746423 ']' 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 746423 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746423 00:07:01.029 11:12:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746423' 00:07:01.029 killing process with pid 746423 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 746423 00:07:01.029 11:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 746423 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 746659 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 746659 ']' 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 746659 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746659 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746659' 00:07:06.304 killing process with pid 746659 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 746659 00:07:06.304 11:12:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 746659 00:07:08.205 00:07:08.205 real 0m11.538s 00:07:08.205 user 0m11.725s 00:07:08.205 sys 0m1.165s 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 ************************************ 00:07:08.205 END TEST locking_app_on_unlocked_coremask 00:07:08.205 ************************************ 00:07:08.205 11:12:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.205 11:12:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:08.205 11:12:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.205 11:12:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.205 11:12:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 ************************************ 00:07:08.205 START TEST locking_app_on_locked_coremask 00:07:08.205 ************************************ 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=748530 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 748530 /var/tmp/spdk.sock 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
748530 ']' 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.205 11:12:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 [2024-07-12 11:12:54.518009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:08.205 [2024-07-12 11:12:54.518103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748530 ] 00:07:08.464 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.464 [2024-07-12 11:12:54.619923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.722 [2024-07-12 11:12:54.828673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=748704 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 748704 
/var/tmp/spdk2.sock 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 748704 /var/tmp/spdk2.sock 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 748704 /var/tmp/spdk2.sock 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 748704 ']' 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.705 11:12:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.705 [2024-07-12 11:12:55.808966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:09.705 [2024-07-12 11:12:55.809060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748704 ] 00:07:09.705 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.705 [2024-07-12 11:12:55.946962] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 748530 has claimed it. 00:07:09.705 [2024-07-12 11:12:55.947024] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (748704) - No such process 00:07:10.271 ERROR: process (pid: 748704) is no longer running 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 748530 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 748530 00:07:10.271 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.529 lslocks: write error 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 748530 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 748530 ']' 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 748530 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748530 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748530' 00:07:10.529 killing process with pid 748530 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 748530 00:07:10.529 11:12:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 748530 00:07:13.064 00:07:13.064 real 0m4.806s 00:07:13.064 user 0m4.913s 00:07:13.064 sys 0m0.794s 00:07:13.064 11:12:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.064 11:12:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.064 ************************************ 00:07:13.064 END TEST locking_app_on_locked_coremask 00:07:13.064 ************************************ 00:07:13.064 11:12:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.064 11:12:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.064 11:12:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.064 11:12:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.064 11:12:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.064 ************************************ 00:07:13.064 START TEST locking_overlapped_coremask 00:07:13.064 ************************************ 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=749260 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 749260 /var/tmp/spdk.sock 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 749260 ']' 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:13.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.064 11:12:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.064 [2024-07-12 11:12:59.375497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:13.064 [2024-07-12 11:12:59.375590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749260 ] 00:07:13.323 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.323 [2024-07-12 11:12:59.478758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.582 [2024-07-12 11:12:59.696046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.582 [2024-07-12 11:12:59.696064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.582 [2024-07-12 11:12:59.696067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=749500 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 749500 /var/tmp/spdk2.sock 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.517 11:13:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 749500 /var/tmp/spdk2.sock 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 749500 /var/tmp/spdk2.sock 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 749500 ']' 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.517 11:13:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.517 [2024-07-12 11:13:00.723493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:14.517 [2024-07-12 11:13:00.723606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749500 ] 00:07:14.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.517 [2024-07-12 11:13:00.864606] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 749260 has claimed it. 00:07:14.517 [2024-07-12 11:13:00.864663] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (749500) - No such process 00:07:15.084 ERROR: process (pid: 749500) is no longer running 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 749260 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 749260 ']' 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 749260 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749260 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749260' 00:07:15.084 killing process with pid 749260 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 749260 00:07:15.084 11:13:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 749260 00:07:17.615 00:07:17.615 real 0m4.599s 00:07:17.615 user 0m12.202s 00:07:17.615 sys 0m0.594s 00:07:17.615 11:13:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.615 11:13:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 
-- # set +x 00:07:17.615 ************************************ 00:07:17.615 END TEST locking_overlapped_coremask 00:07:17.615 ************************************ 00:07:17.615 11:13:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.615 11:13:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:17.615 11:13:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.615 11:13:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.615 11:13:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.874 ************************************ 00:07:17.874 START TEST locking_overlapped_coremask_via_rpc 00:07:17.874 ************************************ 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=750150 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 750150 /var/tmp/spdk.sock 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 750150 ']' 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.874 11:13:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.874 [2024-07-12 11:13:04.042858] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.874 [2024-07-12 11:13:04.042947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750150 ] 00:07:17.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.874 [2024-07-12 11:13:04.141736] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:17.874 [2024-07-12 11:13:04.141781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.133 [2024-07-12 11:13:04.355927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.133 [2024-07-12 11:13:04.355997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.133 [2024-07-12 11:13:04.356002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=750349 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 750349 /var/tmp/spdk2.sock 00:07:19.068 11:13:05 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 750349 ']' 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.068 11:13:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.068 [2024-07-12 11:13:05.383256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.068 [2024-07-12 11:13:05.383348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750349 ] 00:07:19.327 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.327 [2024-07-12 11:13:05.528513] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.327 [2024-07-12 11:13:05.528569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.895 [2024-07-12 11:13:05.982896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.895 [2024-07-12 11:13:05.982980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.895 [2024-07-12 11:13:05.983003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.799 11:13:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.799 [2024-07-12 11:13:07.851493] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 750150 has claimed it. 00:07:21.799 request: 00:07:21.799 { 00:07:21.799 "method": "framework_enable_cpumask_locks", 00:07:21.799 "req_id": 1 00:07:21.799 } 00:07:21.799 Got JSON-RPC error response 00:07:21.799 response: 00:07:21.799 { 00:07:21.799 "code": -32603, 00:07:21.799 "message": "Failed to claim CPU core: 2" 00:07:21.799 } 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 750150 /var/tmp/spdk.sock 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- 
# '[' -z 750150 ']' 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.799 11:13:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 750349 /var/tmp/spdk2.sock 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 750349 ']' 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.799 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.058 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.059 00:07:22.059 real 0m4.263s 00:07:22.059 user 0m1.013s 00:07:22.059 sys 0m0.205s 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.059 11:13:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.059 ************************************ 00:07:22.059 END TEST locking_overlapped_coremask_via_rpc 00:07:22.059 ************************************ 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:22.059 11:13:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.059 11:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
750150 ]] 00:07:22.059 11:13:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 750150 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 750150 ']' 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 750150 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750150 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 750150' 00:07:22.059 killing process with pid 750150 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 750150 00:07:22.059 11:13:08 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 750150 00:07:25.348 11:13:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 750349 ]] 00:07:25.348 11:13:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 750349 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 750349 ']' 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 750349 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 750349 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:25.348 11:13:11 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 750349' 00:07:25.348 killing process with pid 750349 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 750349 00:07:25.348 11:13:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 750349 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 750150 ]] 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 750150 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 750150 ']' 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 750150 00:07:27.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (750150) - No such process 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 750150 is not found' 00:07:27.253 Process with pid 750150 is not found 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 750349 ]] 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 750349 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 750349 ']' 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 750349 00:07:27.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (750349) - No such process 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 750349 is not found' 00:07:27.253 Process with pid 750349 is not found 00:07:27.253 11:13:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.253 00:07:27.253 real 0m50.877s 00:07:27.253 user 1m25.959s 00:07:27.253 sys 0m6.314s 00:07:27.253 11:13:13 event.cpu_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.253 11:13:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.253 ************************************ 00:07:27.253 END TEST cpu_locks 00:07:27.253 ************************************ 00:07:27.253 11:13:13 event -- common/autotest_common.sh@1142 -- # return 0 00:07:27.253 00:07:27.253 real 1m20.590s 00:07:27.253 user 2m20.372s 00:07:27.253 sys 0m9.922s 00:07:27.253 11:13:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.253 11:13:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.253 ************************************ 00:07:27.253 END TEST event 00:07:27.253 ************************************ 00:07:27.512 11:13:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.512 11:13:13 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:27.512 11:13:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.512 11:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.512 11:13:13 -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 ************************************ 00:07:27.512 START TEST thread 00:07:27.512 ************************************ 00:07:27.512 11:13:13 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:27.512 * Looking for test storage... 
00:07:27.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:27.512 11:13:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.512 11:13:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:27.512 11:13:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.512 11:13:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 ************************************ 00:07:27.512 START TEST thread_poller_perf 00:07:27.512 ************************************ 00:07:27.512 11:13:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.512 [2024-07-12 11:13:13.799223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:27.512 [2024-07-12 11:13:13.799315] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751925 ] 00:07:27.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.771 [2024-07-12 11:13:13.900162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.771 [2024-07-12 11:13:14.108463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.771 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:29.148 ====================================== 00:07:29.148 busy:2310655692 (cyc) 00:07:29.148 total_run_count: 396000 00:07:29.148 tsc_hz: 2300000000 (cyc) 00:07:29.148 ====================================== 00:07:29.148 poller_cost: 5834 (cyc), 2536 (nsec) 00:07:29.407 00:07:29.407 real 0m1.748s 00:07:29.407 user 0m1.614s 00:07:29.407 sys 0m0.127s 00:07:29.407 11:13:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.407 11:13:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 ************************************ 00:07:29.407 END TEST thread_poller_perf 00:07:29.407 ************************************ 00:07:29.407 11:13:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:29.407 11:13:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.407 11:13:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.407 11:13:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.407 11:13:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 ************************************ 00:07:29.407 START TEST thread_poller_perf 00:07:29.407 ************************************ 00:07:29.407 11:13:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.407 [2024-07-12 11:13:15.620052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:29.407 [2024-07-12 11:13:15.620147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752183 ]
00:07:29.407 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.407 [2024-07-12 11:13:15.722569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.666 [2024-07-12 11:13:15.933197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.666 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:31.044 ======================================
00:07:31.044 busy:2302378180 (cyc)
00:07:31.044 total_run_count: 5267000
00:07:31.044 tsc_hz: 2300000000 (cyc)
00:07:31.044 ======================================
00:07:31.044 poller_cost: 437 (cyc), 190 (nsec)
00:07:31.044
00:07:31.044 real 0m1.766s
00:07:31.044 user 0m1.631s
00:07:31.044 sys 0m0.128s
00:07:31.044 11:13:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:31.044 11:13:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:31.044 ************************************
00:07:31.044 END TEST thread_poller_perf
00:07:31.044 ************************************
00:07:31.044 11:13:17 thread -- common/autotest_common.sh@1142 -- # return 0
00:07:31.044 11:13:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:31.044
00:07:31.044 real 0m3.729s
00:07:31.044 user 0m3.339s
00:07:31.044 sys 0m0.393s
00:07:31.044 11:13:17 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:31.044 11:13:17 thread -- common/autotest_common.sh@10 -- # set +x
00:07:31.044 ************************************
00:07:31.044 END TEST thread
00:07:31.044 ************************************
00:07:31.303 11:13:17 -- common/autotest_common.sh@1142 -- # return 0
00:07:31.303 11:13:17 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:07:31.303 11:13:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:31.303 11:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:31.303 11:13:17 -- common/autotest_common.sh@10 -- # set +x
00:07:31.303 ************************************
00:07:31.303 START TEST accel
00:07:31.303 ************************************
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:07:31.303 * Looking for test storage...
00:07:31.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:31.303 11:13:17 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:07:31.303 11:13:17 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:07:31.303 11:13:17 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:31.303 11:13:17 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=752506
00:07:31.303 11:13:17 accel -- accel/accel.sh@63 -- # waitforlisten 752506
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@829 -- # '[' -z 752506 ']'
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.303 11:13:17 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:07:31.303 11:13:17 accel -- accel/accel.sh@61 -- # build_accel_config
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.303 11:13:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:31.303 11:13:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:31.303 11:13:17 accel -- common/autotest_common.sh@10 -- # set +x
00:07:31.303 11:13:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:31.303 11:13:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:31.303 11:13:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:31.303 11:13:17 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:31.303 11:13:17 accel -- accel/accel.sh@41 -- # jq -r .
00:07:31.303 [2024-07-12 11:13:17.616787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:31.303 [2024-07-12 11:13:17.616902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752506 ]
00:07:31.562 EAL: No free 2048 kB hugepages reported on node 1
00:07:31.562 [2024-07-12 11:13:17.721747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.820 [2024-07-12 11:13:17.941198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@862 -- # return 0
00:07:32.757 11:13:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:07:32.757 11:13:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:07:32.757 11:13:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:07:32.757 11:13:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:07:32.757 11:13:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:07:32.757 11:13:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:07:32.757 11:13:18 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # IFS==
00:07:32.757 11:13:18 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:32.757 11:13:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:32.757 11:13:18 accel -- accel/accel.sh@75 -- # killprocess 752506
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@948 -- # '[' -z 752506 ']'
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@952 -- # kill -0 752506
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@953 -- # uname
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 752506
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:32.757 11:13:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 752506' killing process with pid 752506 11:13:18 accel -- common/autotest_common.sh@967 -- # kill 752506
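The get_expected_opcs trace above pipes `accel_get_opc_assignments` through the jq filter `. | to_entries | map("\(.key)=\(.value)") | .[]`, which flattens the opcode-to-module JSON object into `key=value` lines that the loop then splits on `=`. A Python sketch of just that transform (the sample input below is hypothetical, not taken from this run):

```python
import json

def opc_assignments_to_lines(rpc_output: str) -> list[str]:
    """Flatten a JSON object of opcode -> module assignments into
    "key=value" lines, mirroring the jq to_entries/map filter."""
    return [f"{k}={v}" for k, v in json.loads(rpc_output).items()]

# Hypothetical accel_get_opc_assignments output:
print(opc_assignments_to_lines('{"copy": "software", "crc32c": "software"}'))
# -> ['copy=software', 'crc32c=software']
```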
11:13:18 accel -- common/autotest_common.sh@972 -- # wait 752506
00:07:35.289 11:13:21 accel -- accel/accel.sh@76 -- # trap - ERR
00:07:35.289 11:13:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@10 -- # set +x
00:07:35.289 11:13:21 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:07:35.289 11:13:21 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:07:35.289 11:13:21 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:35.289 11:13:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:35.289 11:13:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:35.289 11:13:21 accel -- common/autotest_common.sh@10 -- # set +x
00:07:35.289 ************************************
00:07:35.289 START TEST accel_missing_filename
00:07:35.289 ************************************
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:35.289 11:13:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:07:35.289 11:13:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:07:35.289 [2024-07-12 11:13:21.550015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:35.289 [2024-07-12 11:13:21.550102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753206 ]
00:07:35.289 EAL: No free 2048 kB hugepages reported on node 1
00:07:35.548 [2024-07-12 11:13:21.653052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.548 [2024-07-12 11:13:21.881310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.807 [2024-07-12 11:13:22.123062] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:36.374 [2024-07-12 11:13:22.653325] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:07:36.942 A filename is required.
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:36.942
00:07:36.942 real 0m1.576s
00:07:36.942 user 0m1.421s
00:07:36.942 sys 0m0.189s
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:36.942 11:13:23 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:07:36.942 ************************************
00:07:36.942 END TEST accel_missing_filename
00:07:36.942 ************************************
00:07:36.942 11:13:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:36.942 11:13:23 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:36.942 11:13:23 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:36.942 11:13:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:36.942 11:13:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:36.942 ************************************
00:07:36.942 START TEST accel_compress_verify
00:07:36.942 ************************************
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:36.942 11:13:23 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:07:36.942 11:13:23 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:07:36.942 [2024-07-12 11:13:23.190931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:36.942 [2024-07-12 11:13:23.191010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753583 ]
00:07:36.942 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.942 [2024-07-12 11:13:23.291655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.201 [2024-07-12 11:13:23.504547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.460 [2024-07-12 11:13:23.738909] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:38.027 [2024-07-12 11:13:24.290852] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:07:38.596
00:07:38.596 Compression does not support the verify option, aborting.
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:38.596
00:07:38.596 real 0m1.578s
00:07:38.596 user 0m1.422s
00:07:38.596 sys 0m0.192s
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:38.596 11:13:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:07:38.596 ************************************
00:07:38.596 END TEST accel_compress_verify
00:07:38.596 ************************************
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.596 11:13:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.596 ************************************
00:07:38.596 START TEST accel_wrong_workload
00:07:38.596 ************************************
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:07:38.596 11:13:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:07:38.596 Unsupported workload type: foobar [2024-07-12 11:13:24.832311] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:38.596 accel_perf options:
00:07:38.596 [-h help message]
00:07:38.596 [-q queue depth per core]
00:07:38.596 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:38.596 [-T number of threads per core
00:07:38.596 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:38.596 [-t time in seconds]
00:07:38.596 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:38.596 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:38.596 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:38.596 [-l for compress/decompress workloads, name of uncompressed input file
00:07:38.596 [-S for crc32c workload, use this seed value (default 0)
00:07:38.596 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:38.596 [-f for fill workload, use this BYTE value (default 255)
00:07:38.596 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:38.596 [-y verify result if this switch is on]
00:07:38.596 [-a tasks to allocate per core (default: same value as -q)]
00:07:38.596 Can be used to spread operations across a wider range of memory.
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:38.596
00:07:38.596 real 0m0.071s
00:07:38.596 user 0m0.078s
00:07:38.596 sys 0m0.032s
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:38.596 11:13:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:07:38.596 ************************************
00:07:38.596 END TEST accel_wrong_workload
00:07:38.596 ************************************
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.596 11:13:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.596 11:13:24 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.596 ************************************
00:07:38.596 START TEST accel_negative_buffers
00:07:38.596 ************************************
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
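The es= steps traced by the NOT wrapper in this log (es=234 then es=106 then es=1 for accel_missing_filename, es=161 then es=33 then es=1 for accel_compress_verify, and es=1 staying 1 here) are consistent with a simple exit-status normalization: statuses above 128 have 128 subtracted, then any remaining nonzero status collapses to 1. A sketch of that inferred logic (the function name is ours, not the test harness's):

```python
def normalize_exit_status(es: int) -> int:
    """Mirror the es= arithmetic seen in the log: strip the 128 offset
    from large statuses, then collapse any failure to exit code 1."""
    if es > 128:
        es -= 128        # e.g. 234 -> 106, 161 -> 33
    return 1 if es != 0 else 0

print(normalize_exit_status(234))  # -> 1 (via the intermediate value 106)
```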
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:38.596 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:07:38.596 11:13:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:07:38.856 -x option must be non-negative. [2024-07-12 11:13:24.971361] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:38.856 accel_perf options:
00:07:38.856 [-h help message]
00:07:38.856 [-q queue depth per core]
00:07:38.856 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:38.856 [-T number of threads per core
00:07:38.856 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:38.856 [-t time in seconds]
00:07:38.856 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:38.856 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:38.856 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:38.856 [-l for compress/decompress workloads, name of uncompressed input file
00:07:38.856 [-S for crc32c workload, use this seed value (default 0)
00:07:38.856 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:38.856 [-f for fill workload, use this BYTE value (default 255)
00:07:38.856 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:38.856 [-y verify result if this switch is on]
00:07:38.856 [-a tasks to allocate per core (default: same value as -q)]
00:07:38.856 Can be used to spread operations across a wider range of memory.
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:38.856
00:07:38.856 real 0m0.071s
00:07:38.856 user 0m0.078s
00:07:38.856 sys 0m0.037s
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:38.856 11:13:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:07:38.856 ************************************
00:07:38.856 END TEST accel_negative_buffers
00:07:38.856 ************************************
00:07:38.856 11:13:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.856 11:13:25 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:38.856 11:13:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:38.856 11:13:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.856 11:13:25 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.856 ************************************
00:07:38.856 START TEST accel_crc32c
00:07:38.856 ************************************
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:38.856 11:13:25 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:38.856 [2024-07-12 11:13:25.106085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:38.856 [2024-07-12 11:13:25.106183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753974 ]
00:07:38.856 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.856 [2024-07-12 11:13:25.211036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.115 [2024-07-12 11:13:25.418285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.374 11:13:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.278 11:13:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.278 00:07:41.278 real 0m2.558s 00:07:41.278 user 0m2.387s 00:07:41.278 sys 0m0.186s 00:07:41.278 11:13:27 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.278 11:13:27 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:41.278 ************************************ 00:07:41.278 END TEST accel_crc32c 00:07:41.278 ************************************ 00:07:41.537 11:13:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.537 11:13:27 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:41.537 11:13:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:41.537 11:13:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.537 11:13:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.537 
************************************ 00:07:41.537 START TEST accel_crc32c_C2 00:07:41.537 ************************************ 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.537 11:13:27 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:41.537 [2024-07-12 11:13:27.731969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:41.537 [2024-07-12 11:13:27.732050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754447 ] 00:07:41.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.538 [2024-07-12 11:13:27.833031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.796 [2024-07-12 11:13:28.044434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.055 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:42.056 11:13:28 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:42.056 11:13:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.957 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.958 
11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.958 00:07:43.958 real 0m2.564s 00:07:43.958 user 0m2.393s 00:07:43.958 sys 0m0.183s 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.958 11:13:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:43.958 ************************************ 00:07:43.958 END TEST accel_crc32c_C2 00:07:43.958 ************************************ 00:07:43.958 11:13:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.958 11:13:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:43.958 11:13:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:43.958 11:13:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.958 11:13:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.216 ************************************ 00:07:44.216 START TEST accel_copy 00:07:44.216 ************************************ 00:07:44.216 11:13:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:44.216 11:13:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:44.216 [2024-07-12 11:13:30.360728] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.216 [2024-07-12 11:13:30.360813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754929 ] 00:07:44.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.216 [2024-07-12 11:13:30.460015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.476 [2024-07-12 11:13:30.673646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.735 11:13:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:44.736 11:13:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.736 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.736 11:13:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:46.639 11:13:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.639 00:07:46.639 real 0m2.545s 00:07:46.639 user 0m2.360s 00:07:46.639 sys 0m0.197s 00:07:46.639 11:13:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.639 11:13:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.639 ************************************ 00:07:46.639 END TEST accel_copy 00:07:46.639 ************************************ 00:07:46.639 11:13:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.639 11:13:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.639 11:13:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.639 11:13:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.639 11:13:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.639 ************************************ 00:07:46.639 START TEST accel_fill 00:07:46.639 ************************************ 00:07:46.639 11:13:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.639 11:13:32 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:46.639 11:13:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:46.639 [2024-07-12 11:13:32.974819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:46.639 [2024-07-12 11:13:32.974903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755402 ] 00:07:46.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.898 [2024-07-12 11:13:33.075621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.156 [2024-07-12 11:13:33.284660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.415 11:13:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:49.320 11:13:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:49.320 00:07:49.320 real 0m2.550s 00:07:49.320 user 0m2.369s 00:07:49.320 sys 0m0.193s 00:07:49.320 11:13:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.320 11:13:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:49.320 ************************************ 00:07:49.320 END TEST accel_fill 00:07:49.320 ************************************ 00:07:49.320 11:13:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.320 11:13:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:49.321 11:13:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.321 11:13:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.321 11:13:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.321 ************************************ 00:07:49.321 START TEST accel_copy_crc32c 00:07:49.321 ************************************ 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:49.321 11:13:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:49.321 [2024-07-12 11:13:35.592098] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:49.321 [2024-07-12 11:13:35.592178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755798 ] 00:07:49.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.579 [2024-07-12 11:13:35.692205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.579 [2024-07-12 11:13:35.904681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.838 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.838 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.839 11:13:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.741 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.741 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.741 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.741 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.741 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.742 00:07:51.742 real 0m2.536s 00:07:51.742 user 0m2.357s 00:07:51.742 sys 0m0.193s 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.742 11:13:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:51.742 ************************************ 00:07:51.742 END TEST accel_copy_crc32c 
00:07:51.742 ************************************ 00:07:52.001 11:13:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.001 11:13:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.001 11:13:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:52.001 11:13:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.001 11:13:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.001 ************************************ 00:07:52.001 START TEST accel_copy_crc32c_C2 00:07:52.001 ************************************ 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.001 11:13:38 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.001 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:52.001 [2024-07-12 11:13:38.167325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:52.001 [2024-07-12 11:13:38.167494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756165 ] 00:07:52.001 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.001 [2024-07-12 11:13:38.265575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.260 [2024-07-12 11:13:38.470231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.518 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.518 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.518 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.518 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.519 
11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:52.519 11:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.423 00:07:54.423 real 0m2.519s 00:07:54.423 user 0m2.358s 00:07:54.423 sys 0m0.175s 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.423 11:13:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:54.423 ************************************ 00:07:54.423 
00:07:54.423 END TEST accel_copy_crc32c_C2
00:07:54.423 ************************************
00:07:54.423 11:13:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:54.423 11:13:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:54.423 ************************************
00:07:54.423 START TEST accel_dualcast
00:07:54.423 ************************************
00:07:54.423 11:13:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:54.423 11:13:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:54.423 (build_accel_config and accel_perf option xtrace elided: accel_opc=dualcast, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes)
00:07:54.423 [2024-07-12 11:13:40.754103] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:54.423 [2024-07-12 11:13:40.754185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756623 ]
00:07:54.682 EAL: No free 2048 kB hugepages reported on node 1
00:07:54.682 [2024-07-12 11:13:40.848751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.940 [2024-07-12 11:13:41.062487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.100 11:13:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:57.100 11:13:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:57.100 11:13:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:57.100 real 0m2.526s
00:07:57.100 user 0m2.364s
00:07:57.100 sys 0m0.174s
00:07:57.100 ************************************
00:07:57.100 END TEST accel_dualcast
00:07:57.100 ************************************
00:07:57.100 11:13:43 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:57.100 11:13:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:57.100 ************************************
00:07:57.100 START TEST accel_compare
00:07:57.100 ************************************
00:07:57.100 11:13:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:07:57.100 11:13:43 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:57.100 (build_accel_config and accel_perf option xtrace elided: accel_opc=compare, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes)
00:07:57.100 [2024-07-12 11:13:43.347716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:57.101 [2024-07-12 11:13:43.347799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757105 ]
00:07:57.101 EAL: No free 2048 kB hugepages reported on node 1
00:07:57.101 [2024-07-12 11:13:43.447598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.359 [2024-07-12 11:13:43.658761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.777 11:13:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:59.777 11:13:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:59.777 11:13:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:59.777 real 0m2.568s
00:07:59.777 user 0m2.390s
00:07:59.777 sys 0m0.179s
00:07:59.777 ************************************
00:07:59.777 END TEST accel_compare
00:07:59.777 ************************************
00:07:59.777 11:13:45 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:59.777 11:13:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:59.777 ************************************
00:07:59.777 START TEST accel_xor
00:07:59.777 ************************************
00:07:59.777 11:13:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:07:59.778 11:13:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:59.778 (build_accel_config and accel_perf option xtrace elided: accel_opc=xor, 2, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes)
00:07:59.778 [2024-07-12 11:13:45.979632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:59.778 [2024-07-12 11:13:45.979716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757578 ]
00:07:59.778 EAL: No free 2048 kB hugepages reported on node 1
00:07:59.778 [2024-07-12 11:13:46.078847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.037 [2024-07-12 11:13:46.295696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.198 11:13:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:02.198 11:13:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:02.198 11:13:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:02.198 real 0m2.603s
00:08:02.198 user 0m2.422s
00:08:02.198 sys 0m0.193s
00:08:02.198 ************************************
00:08:02.198 END TEST accel_xor
00:08:02.198 ************************************
00:08:02.458 11:13:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:02.458 11:13:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:08:02.458 ************************************
00:08:02.458 START TEST accel_xor
00:08:02.458 ************************************
00:08:02.458 11:13:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:08:02.458 11:13:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:02.458 (build_accel_config and accel_perf option xtrace elided: accel_opc=xor, 3, '4096 bytes', accel_module=software, 32, 32, 1, '1 seconds', Yes)
00:08:02.458 [2024-07-12 11:13:48.648401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:02.458 [2024-07-12 11:13:48.648487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758060 ]
00:08:02.458 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.458 [2024-07-12 11:13:48.748415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:02.975 [2024-07-12 11:13:48.952353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.878 11:13:51 accel.accel_xor --
accel/accel.sh@27 -- # [[ -n software ]]
00:08:04.878 11:13:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:04.878 11:13:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:04.878
00:08:04.878 real 0m2.592s
00:08:04.878 user 0m2.415s
00:08:04.878 sys 0m0.188s
00:08:04.878 11:13:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:04.878 11:13:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:08:04.878 ************************************
00:08:04.878 END TEST accel_xor
00:08:04.878 ************************************
00:08:04.878 11:13:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:04.878 11:13:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:04.878 11:13:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:04.878 11:13:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:04.878 11:13:51 accel -- common/autotest_common.sh@10 -- # set +x
00:08:05.137 ************************************
00:08:05.137 START TEST accel_dif_verify
00:08:05.137 ************************************
00:08:05.137 11:13:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@12 -- #
build_accel_config
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=,
00:08:05.137 11:13:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:08:05.137 [2024-07-12 11:13:51.296813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:05.137 [2024-07-12 11:13:51.296897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758534 ]
00:08:05.137 EAL: No free 2048 kB hugepages reported on node 1
00:08:05.137 [2024-07-12 11:13:51.390966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:05.396 [2024-07-12 11:13:51.606452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:08:05.655 11:13:51 accel.accel_dif_verify --
accel/accel.sh@20 -- # val=0x1 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:05.655 11:13:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:07.558 11:13:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]]
00:08:07.558
00:08:07.558 real 0m2.524s
00:08:07.558 user 0m2.361s
00:08:07.558 sys 0m0.178s
00:08:07.558 11:13:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:07.558 11:13:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:08:07.558 ************************************
00:08:07.558 END TEST accel_dif_verify
00:08:07.558 ************************************
00:08:07.558 11:13:53 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:07.558 11:13:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:07.558 11:13:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:07.558 11:13:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:07.558 11:13:53 accel -- common/autotest_common.sh@10 -- # set +x
00:08:07.558 ************************************
00:08:07.558 START TEST accel_dif_generate
00:08:07.558 ************************************
00:08:07.558 11:13:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:07.558 11:13:53
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=,
00:08:07.558 11:13:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:07.558 [2024-07-12 11:13:53.888328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:07.558 [2024-07-12 11:13:53.888413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759016 ]
00:08:07.817 EAL: No free 2048 kB hugepages reported on node 1
00:08:07.817 [2024-07-12 11:13:53.982131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:08.075 [2024-07-12 11:13:54.193096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21
-- # case "$var" in 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.075 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:08.076 11:13:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]]
00:08:10.607 11:13:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:10.607
00:08:10.607 real 0m2.531s
00:08:10.607 user 0m2.373s
00:08:10.607 sys 0m0.172s
00:08:10.607 11:13:56 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:10.607 11:13:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:08:10.607 ************************************
00:08:10.607 END TEST accel_dif_generate
00:08:10.607 ************************************
00:08:10.607 11:13:56 accel -- common/autotest_common.sh@1142 -- # return 0
00:08:10.607 11:13:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:10.607 11:13:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:08:10.607 11:13:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:10.607 11:13:56 accel -- common/autotest_common.sh@10 -- # set +x
00:08:10.607 ************************************
00:08:10.607 START TEST accel_dif_generate_copy
00:08:10.607 ************************************
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:10.607 11:13:56
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:08:10.607 11:13:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:08:10.607 [2024-07-12 11:13:56.502365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:10.607 [2024-07-12 11:13:56.502459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759487 ]
00:08:10.607 EAL: No free 2048 kB hugepages reported on node 1
00:08:10.607 [2024-07-12 11:13:56.604807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.607 [2024-07-12 11:13:56.809033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:10.865 11:13:57 accel.accel_dif_generate_copy --
accel/accel.sh@19 -- # IFS=: 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.865 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.866 11:13:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.773 11:13:59 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.773 00:08:12.773 real 0m2.557s 00:08:12.773 user 0m2.384s 00:08:12.773 sys 0m0.186s 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.773 11:13:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.773 ************************************ 00:08:12.773 END TEST accel_dif_generate_copy 00:08:12.773 ************************************ 00:08:12.773 11:13:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.773 11:13:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:12.773 11:13:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.773 11:13:59 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:12.773 11:13:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.773 11:13:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.773 ************************************ 00:08:12.773 START TEST accel_comp 00:08:12.773 ************************************ 00:08:12.773 11:13:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.773 11:13:59 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:12.773 11:13:59 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:12.773 11:13:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.773 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:12.774 11:13:59 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:12.774 11:13:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:12.774 [2024-07-12 11:13:59.113159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:12.774 [2024-07-12 11:13:59.113242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759966 ] 00:08:13.033 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.033 [2024-07-12 11:13:59.213480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.292 [2024-07-12 11:13:59.421352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:13.551 11:13:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.458 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:15.459 11:14:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.459 00:08:15.459 real 0m2.537s 00:08:15.459 user 0m2.373s 00:08:15.459 sys 0m0.179s 00:08:15.459 11:14:01 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.459 11:14:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:15.459 ************************************ 00:08:15.459 END TEST accel_comp 00:08:15.459 ************************************ 00:08:15.459 11:14:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.459 11:14:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.459 11:14:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:15.459 11:14:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.459 11:14:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.459 ************************************ 00:08:15.459 START TEST accel_decomp 00:08:15.459 ************************************ 00:08:15.459 11:14:01 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:15.459 11:14:01 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:08:15.459 [2024-07-12 11:14:01.730851] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:15.459 [2024-07-12 11:14:01.730936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760361 ] 00:08:15.459 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.717 [2024-07-12 11:14:01.831899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.717 [2024-07-12 11:14:02.040749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.975 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.975 
11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:15.976 11:14:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.026 11:14:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.026 00:08:18.026 real 0m2.554s 00:08:18.026 user 0m2.392s 00:08:18.026 sys 0m0.176s 00:08:18.026 11:14:04 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.026 11:14:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:18.026 ************************************ 00:08:18.026 END TEST accel_decomp 00:08:18.026 ************************************ 00:08:18.026 11:14:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.026 11:14:04 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.026 11:14:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:18.026 11:14:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.026 11:14:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.026 ************************************ 00:08:18.026 START TEST accel_decomp_full 00:08:18.026 ************************************ 00:08:18.026 11:14:04 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.026 
11:14:04 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:18.026 11:14:04 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:18.026 [2024-07-12 11:14:04.337876] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:18.026 [2024-07-12 11:14:04.337957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760742 ] 00:08:18.347 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.347 [2024-07-12 11:14:04.444192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.347 [2024-07-12 11:14:04.649537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.606 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.606 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:18.607 11:14:04 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:18.607 11:14:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:20.542 11:14:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.542 00:08:20.542 real 0m2.578s 00:08:20.542 user 0m2.410s 00:08:20.542 sys 0m0.181s 00:08:20.542 11:14:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.542 11:14:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:20.542 ************************************ 00:08:20.542 END TEST accel_decomp_full 00:08:20.542 ************************************ 00:08:20.801 11:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.801 11:14:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.801 11:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:20.801 11:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.801 11:14:06 accel 
-- common/autotest_common.sh@10 -- # set +x 00:08:20.801 ************************************ 00:08:20.801 START TEST accel_decomp_mcore 00:08:20.801 ************************************ 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:20.801 11:14:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:20.801 [2024-07-12 11:14:06.985069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:20.801 [2024-07-12 11:14:06.985152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761201 ] 00:08:20.801 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.801 [2024-07-12 11:14:07.087213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.060 [2024-07-12 11:14:07.322053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.060 [2024-07-12 11:14:07.322081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.060 [2024-07-12 11:14:07.322101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.060 [2024-07-12 11:14:07.322108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:21.318 11:14:07 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.318 11:14:07 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.318 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:21.319 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:21.319 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:21.319 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:21.319 11:14:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:23.851 
11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.851 00:08:23.851 real 0m2.675s 00:08:23.851 user 0m8.080s 00:08:23.851 sys 0m0.204s 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.851 11:14:09 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:23.851 ************************************ 00:08:23.851 END TEST accel_decomp_mcore 00:08:23.851 ************************************ 00:08:23.851 11:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.851 11:14:09 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:23.851 11:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:23.851 11:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.851 11:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.851 ************************************ 00:08:23.851 START TEST accel_decomp_full_mcore 00:08:23.851 ************************************ 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:23.851 11:14:09 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:23.851 [2024-07-12 11:14:09.697810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:23.852 [2024-07-12 11:14:09.697888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761677 ] 00:08:23.852 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.852 [2024-07-12 11:14:09.799233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.852 [2024-07-12 11:14:10.018197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.852 [2024-07-12 11:14:10.018269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.852 [2024-07-12 11:14:10.018374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.852 [2024-07-12 11:14:10.018393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:24.110 11:14:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 
11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.014 00:08:26.014 real 0m2.673s 00:08:26.014 user 0m8.186s 00:08:26.014 sys 0m0.194s 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.014 11:14:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:26.014 ************************************ 00:08:26.014 END TEST accel_decomp_full_mcore 00:08:26.014 ************************************ 00:08:26.274 11:14:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.274 11:14:12 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.274 11:14:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:26.274 11:14:12 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:26.274 11:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.274 ************************************ 00:08:26.274 START TEST accel_decomp_mthread 00:08:26.274 ************************************ 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:26.274 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:08:26.274 [2024-07-12 11:14:12.441173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:26.274 [2024-07-12 11:14:12.441270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762162 ] 00:08:26.274 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.274 [2024-07-12 11:14:12.536054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.533 [2024-07-12 11:14:12.762470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.792 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.792 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.792 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.792 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:26.793 
11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:26.793 11:14:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:26.793 11:14:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.698 00:08:28.698 real 0m2.565s 00:08:28.698 user 0m2.418s 00:08:28.698 sys 0m0.161s 00:08:28.698 11:14:14 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.698 11:14:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:28.698 ************************************ 00:08:28.698 END TEST accel_decomp_mthread 00:08:28.698 ************************************ 00:08:28.698 11:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.698 11:14:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.698 11:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:28.698 11:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.698 11:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.698 ************************************ 00:08:28.698 START TEST accel_decomp_full_mthread 00:08:28.698 ************************************ 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:28.698 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:28.956 [2024-07-12 11:14:15.064468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:28.957 [2024-07-12 11:14:15.064564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762641 ] 00:08:28.957 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.957 [2024-07-12 11:14:15.163365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.215 [2024-07-12 11:14:15.376367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:29.474 11:14:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.379 00:08:31.379 real 0m2.601s 00:08:31.379 user 0m2.430s 00:08:31.379 sys 0m0.185s 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.379 11:14:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:31.379 ************************************ 00:08:31.379 END TEST accel_decomp_full_mthread 00:08:31.379 ************************************ 00:08:31.379 11:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.379 11:14:17 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:31.379 11:14:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.379 
11:14:17 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:31.379 11:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.379 11:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.379 11:14:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.379 11:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.379 11:14:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.379 11:14:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.379 11:14:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.379 11:14:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.379 11:14:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:31.379 11:14:17 accel -- accel/accel.sh@41 -- # jq -r . 00:08:31.379 ************************************ 00:08:31.379 START TEST accel_dif_functional_tests 00:08:31.379 ************************************ 00:08:31.379 11:14:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.639 [2024-07-12 11:14:17.747107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:31.639 [2024-07-12 11:14:17.747197] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763118 ] 00:08:31.639 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.639 [2024-07-12 11:14:17.846995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.899 [2024-07-12 11:14:18.063535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.899 [2024-07-12 11:14:18.063603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.899 [2024-07-12 11:14:18.063609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.158 00:08:32.158 00:08:32.158 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.158 http://cunit.sourceforge.net/ 00:08:32.158 00:08:32.158 00:08:32.158 Suite: accel_dif 00:08:32.158 Test: verify: DIF generated, GUARD check ...passed 00:08:32.158 Test: verify: DIF generated, APPTAG check ...passed 00:08:32.158 Test: verify: DIF generated, REFTAG check ...passed 00:08:32.158 Test: verify: DIF not generated, GUARD check ...[2024-07-12 11:14:18.437571] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:32.158 passed 00:08:32.158 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 11:14:18.437654] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:32.158 passed 00:08:32.158 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 11:14:18.437688] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:32.158 passed 00:08:32.158 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:32.158 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 11:14:18.437762] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:08:32.158 passed 00:08:32.158 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:32.158 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:32.158 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:32.158 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 11:14:18.437911] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:32.158 passed 00:08:32.158 Test: verify copy: DIF generated, GUARD check ...passed 00:08:32.158 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:32.158 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:32.158 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 11:14:18.438095] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:32.158 passed 00:08:32.158 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 11:14:18.438141] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:32.158 passed 00:08:32.158 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 11:14:18.438180] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:32.158 passed 00:08:32.158 Test: generate copy: DIF generated, GUARD check ...passed 00:08:32.158 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:32.158 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:32.158 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:32.158 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:32.158 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:32.158 Test: generate copy: iovecs-len validate ...[2024-07-12 11:14:18.438476] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:32.158 passed 00:08:32.158 Test: generate copy: buffer alignment validate ...passed 00:08:32.158 00:08:32.158 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.158 suites 1 1 n/a 0 0 00:08:32.158 tests 26 26 26 0 0 00:08:32.158 asserts 115 115 115 0 n/a 00:08:32.158 00:08:32.158 Elapsed time = 0.003 seconds 00:08:33.537 00:08:33.537 real 0m2.049s 00:08:33.537 user 0m4.319s 00:08:33.537 sys 0m0.223s 00:08:33.537 11:14:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.537 11:14:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 ************************************ 00:08:33.537 END TEST accel_dif_functional_tests 00:08:33.537 ************************************ 00:08:33.537 11:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.537 00:08:33.537 real 1m2.324s 00:08:33.537 user 1m11.092s 00:08:33.537 sys 0m5.997s 00:08:33.537 11:14:19 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.537 11:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 ************************************ 00:08:33.537 END TEST accel 00:08:33.537 ************************************ 00:08:33.537 11:14:19 -- common/autotest_common.sh@1142 -- # return 0 00:08:33.537 11:14:19 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:33.537 11:14:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:33.537 11:14:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.537 11:14:19 -- common/autotest_common.sh@10 -- # set +x 00:08:33.537 ************************************ 00:08:33.537 START TEST accel_rpc 00:08:33.537 ************************************ 00:08:33.537 11:14:19 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:33.796 * Looking for test storage... 
00:08:33.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:33.796 11:14:19 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:33.796 11:14:19 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=763636 00:08:33.796 11:14:19 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 763636 00:08:33.796 11:14:19 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 763636 ']' 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.796 11:14:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.796 [2024-07-12 11:14:20.006211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:33.796 [2024-07-12 11:14:20.006308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763636 ] 00:08:33.796 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.796 [2024-07-12 11:14:20.110843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.055 [2024-07-12 11:14:20.310469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.622 11:14:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.622 11:14:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:34.622 11:14:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:34.622 11:14:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:34.622 11:14:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:34.622 11:14:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:34.622 11:14:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:34.622 11:14:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.622 11:14:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.623 11:14:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 ************************************ 00:08:34.623 START TEST accel_assign_opcode 00:08:34.623 ************************************ 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:08:34.623 [2024-07-12 11:14:20.800247] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 [2024-07-12 11:14:20.812259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.623 11:14:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.559 software 00:08:35.559 00:08:35.559 real 0m0.946s 00:08:35.559 user 0m0.046s 00:08:35.559 sys 0m0.010s 00:08:35.559 11:14:21 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.559 11:14:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:35.559 ************************************ 00:08:35.559 END TEST accel_assign_opcode 00:08:35.559 ************************************ 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:35.559 11:14:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 763636 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 763636 ']' 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 763636 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 763636 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 763636' 00:08:35.559 killing process with pid 763636 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 763636 00:08:35.559 11:14:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 763636 00:08:38.091 00:08:38.091 real 0m4.408s 00:08:38.091 user 0m4.360s 00:08:38.091 sys 0m0.533s 00:08:38.091 11:14:24 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.091 11:14:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.091 ************************************ 00:08:38.091 END TEST accel_rpc 00:08:38.091 ************************************ 00:08:38.091 11:14:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:38.091 11:14:24 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.091 11:14:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.091 11:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.091 11:14:24 -- common/autotest_common.sh@10 -- # set +x 00:08:38.091 ************************************ 00:08:38.091 START TEST app_cmdline 00:08:38.091 ************************************ 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.091 * Looking for test storage... 00:08:38.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.091 11:14:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.091 11:14:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=764400 00:08:38.091 11:14:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 764400 00:08:38.091 11:14:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 764400 ']' 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.091 11:14:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.349 [2024-07-12 11:14:24.496750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:38.349 [2024-07-12 11:14:24.496864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764400 ] 00:08:38.350 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.350 [2024-07-12 11:14:24.599629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.608 [2024-07-12 11:14:24.811065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.543 11:14:25 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.543 11:14:25 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.543 { 00:08:39.543 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:39.543 "fields": { 00:08:39.543 "major": 24, 00:08:39.543 "minor": 9, 00:08:39.543 "patch": 0, 00:08:39.543 "suffix": "-pre", 00:08:39.543 "commit": "719d03c6a" 00:08:39.543 } 00:08:39.543 } 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:39.543 11:14:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.543 11:14:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.543 11:14:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 11:14:25 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 11:14:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.802 11:14:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.802 11:14:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.802 11:14:25 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.802 request: 00:08:39.802 { 00:08:39.802 "method": "env_dpdk_get_mem_stats", 00:08:39.802 "req_id": 1 
00:08:39.802 } 00:08:39.802 Got JSON-RPC error response 00:08:39.802 response: 00:08:39.802 { 00:08:39.802 "code": -32601, 00:08:39.802 "message": "Method not found" 00:08:39.802 } 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:39.802 11:14:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 764400 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 764400 ']' 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 764400 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764400 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764400' 00:08:39.802 killing process with pid 764400 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 764400 00:08:39.802 11:14:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 764400 00:08:42.335 00:08:42.335 real 0m4.289s 00:08:42.335 user 0m4.481s 00:08:42.335 sys 0m0.564s 00:08:42.335 11:14:28 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.335 11:14:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.335 ************************************ 00:08:42.335 END TEST app_cmdline 00:08:42.335 ************************************ 00:08:42.335 11:14:28 -- 
common/autotest_common.sh@1142 -- # return 0 00:08:42.335 11:14:28 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.335 11:14:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.335 11:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.335 11:14:28 -- common/autotest_common.sh@10 -- # set +x 00:08:42.335 ************************************ 00:08:42.335 START TEST version 00:08:42.335 ************************************ 00:08:42.335 11:14:28 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.594 * Looking for test storage... 00:08:42.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:42.594 11:14:28 version -- app/version.sh@17 -- # get_header_version major 00:08:42.594 11:14:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # cut -f2 00:08:42.594 11:14:28 version -- app/version.sh@17 -- # major=24 00:08:42.594 11:14:28 version -- app/version.sh@18 -- # get_header_version minor 00:08:42.594 11:14:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # cut -f2 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.594 11:14:28 version -- app/version.sh@18 -- # minor=9 00:08:42.594 11:14:28 version -- app/version.sh@19 -- # get_header_version patch 00:08:42.594 11:14:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.594 
11:14:28 version -- app/version.sh@14 -- # cut -f2 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.594 11:14:28 version -- app/version.sh@19 -- # patch=0 00:08:42.594 11:14:28 version -- app/version.sh@20 -- # get_header_version suffix 00:08:42.594 11:14:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # cut -f2 00:08:42.594 11:14:28 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.594 11:14:28 version -- app/version.sh@20 -- # suffix=-pre 00:08:42.594 11:14:28 version -- app/version.sh@22 -- # version=24.9 00:08:42.594 11:14:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.594 11:14:28 version -- app/version.sh@28 -- # version=24.9rc0 00:08:42.594 11:14:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:42.594 11:14:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:42.594 11:14:28 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:42.594 11:14:28 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:42.594 00:08:42.594 real 0m0.138s 00:08:42.594 user 0m0.067s 00:08:42.594 sys 0m0.105s 00:08:42.594 11:14:28 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.594 11:14:28 version -- common/autotest_common.sh@10 -- # set +x 00:08:42.594 ************************************ 00:08:42.594 END TEST version 00:08:42.594 ************************************ 00:08:42.594 11:14:28 -- common/autotest_common.sh@1142 -- # return 0 00:08:42.594 11:14:28 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@198 -- # uname -s 00:08:42.594 11:14:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:42.594 11:14:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.594 11:14:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.594 11:14:28 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:42.594 11:14:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.594 11:14:28 -- common/autotest_common.sh@10 -- # set +x 00:08:42.594 11:14:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:42.594 11:14:28 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:42.594 11:14:28 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.594 11:14:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.594 11:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.594 11:14:28 -- common/autotest_common.sh@10 -- # set +x 00:08:42.594 ************************************ 00:08:42.594 START TEST nvmf_tcp 00:08:42.594 ************************************ 00:08:42.594 11:14:28 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:42.853 * Looking for test storage... 00:08:42.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.853 11:14:28 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.853 11:14:29 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.853 11:14:29 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.853 11:14:29 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.853 11:14:29 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:42.853 11:14:29 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:42.853 11:14:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.853 11:14:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:42.853 11:14:29 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:42.853 11:14:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.853 11:14:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.853 11:14:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.853 ************************************ 00:08:42.853 START TEST nvmf_example 00:08:42.853 ************************************ 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:42.853 * Looking for test storage... 
00:08:42.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.853 11:14:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.853 11:14:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.853 
11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.853 11:14:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.177 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.177 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.177 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.178 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.178 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.178 11:14:34 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:48.178 00:08:48.178 --- 10.0.0.2 ping statistics --- 00:08:48.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.178 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:48.178 00:08:48.178 --- 10.0.0.1 ping statistics --- 00:08:48.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.178 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.178 11:14:34 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=768271 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 768271 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 768271 ']' 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.178 11:14:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:49.374 11:14:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:49.374 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.375 Initializing NVMe Controllers 00:08:59.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:59.375 Initialization complete. Launching workers. 
00:08:59.375 ======================================================== 00:08:59.375 Latency(us) 00:08:59.375 Device Information : IOPS MiB/s Average min max 00:08:59.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16075.30 62.79 3983.40 800.10 15650.53 00:08:59.376 ======================================================== 00:08:59.376 Total : 16075.30 62.79 3983.40 800.10 15650.53 00:08:59.376 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.634 rmmod nvme_tcp 00:08:59.634 rmmod nvme_fabrics 00:08:59.634 rmmod nvme_keyring 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 768271 ']' 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 768271 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 768271 ']' 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 768271 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 768271 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 768271' 00:08:59.634 killing process with pid 768271 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 768271 00:08:59.634 11:14:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 768271 00:09:01.012 nvmf threads initialize successfully 00:09:01.012 bdev subsystem init successfully 00:09:01.012 created a nvmf target service 00:09:01.012 create targets's poll groups done 00:09:01.012 all subsystems of target started 00:09:01.012 nvmf target is running 00:09:01.012 all subsystems of target stopped 00:09:01.012 destroy targets's poll groups done 00:09:01.012 destroyed the nvmf target service 00:09:01.012 bdev subsystem finish successfully 00:09:01.012 nvmf threads destroy successfully 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.012 11:14:47 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.919 11:14:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.919 11:14:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:02.919 11:14:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.919 11:14:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.180 00:09:03.180 real 0m20.251s 00:09:03.180 user 0m49.064s 00:09:03.180 sys 0m5.439s 00:09:03.180 11:14:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.180 11:14:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.180 ************************************ 00:09:03.180 END TEST nvmf_example 00:09:03.180 ************************************ 00:09:03.180 11:14:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.180 11:14:49 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.180 11:14:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.180 11:14:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.180 11:14:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.180 ************************************ 00:09:03.180 START TEST nvmf_filesystem 00:09:03.180 ************************************ 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.180 * Looking for test storage... 
00:09:03.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:03.180 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:03.180 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:03.181 #define SPDK_CONFIG_H 00:09:03.181 
#define SPDK_CONFIG_APPS 1 00:09:03.181 #define SPDK_CONFIG_ARCH native 00:09:03.181 #define SPDK_CONFIG_ASAN 1 00:09:03.181 #undef SPDK_CONFIG_AVAHI 00:09:03.181 #undef SPDK_CONFIG_CET 00:09:03.181 #define SPDK_CONFIG_COVERAGE 1 00:09:03.181 #define SPDK_CONFIG_CROSS_PREFIX 00:09:03.181 #undef SPDK_CONFIG_CRYPTO 00:09:03.181 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:03.181 #undef SPDK_CONFIG_CUSTOMOCF 00:09:03.181 #undef SPDK_CONFIG_DAOS 00:09:03.181 #define SPDK_CONFIG_DAOS_DIR 00:09:03.181 #define SPDK_CONFIG_DEBUG 1 00:09:03.181 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:03.181 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.181 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:03.181 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:03.181 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:03.181 #undef SPDK_CONFIG_DPDK_UADK 00:09:03.181 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.181 #define SPDK_CONFIG_EXAMPLES 1 00:09:03.181 #undef SPDK_CONFIG_FC 00:09:03.181 #define SPDK_CONFIG_FC_PATH 00:09:03.181 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:03.181 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:03.181 #undef SPDK_CONFIG_FUSE 00:09:03.181 #undef SPDK_CONFIG_FUZZER 00:09:03.181 #define SPDK_CONFIG_FUZZER_LIB 00:09:03.181 #undef SPDK_CONFIG_GOLANG 00:09:03.181 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:03.181 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:03.181 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:03.181 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:03.181 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:03.181 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:03.181 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:03.181 #define SPDK_CONFIG_IDXD 1 00:09:03.181 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:03.181 #undef SPDK_CONFIG_IPSEC_MB 00:09:03.181 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:03.181 #define SPDK_CONFIG_ISAL 1 00:09:03.181 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:03.181 #define 
SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:03.181 #define SPDK_CONFIG_LIBDIR 00:09:03.181 #undef SPDK_CONFIG_LTO 00:09:03.181 #define SPDK_CONFIG_MAX_LCORES 128 00:09:03.181 #define SPDK_CONFIG_NVME_CUSE 1 00:09:03.181 #undef SPDK_CONFIG_OCF 00:09:03.181 #define SPDK_CONFIG_OCF_PATH 00:09:03.181 #define SPDK_CONFIG_OPENSSL_PATH 00:09:03.181 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:03.181 #define SPDK_CONFIG_PGO_DIR 00:09:03.181 #undef SPDK_CONFIG_PGO_USE 00:09:03.181 #define SPDK_CONFIG_PREFIX /usr/local 00:09:03.181 #undef SPDK_CONFIG_RAID5F 00:09:03.181 #undef SPDK_CONFIG_RBD 00:09:03.181 #define SPDK_CONFIG_RDMA 1 00:09:03.181 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:03.181 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:03.181 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:03.181 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:03.181 #define SPDK_CONFIG_SHARED 1 00:09:03.181 #undef SPDK_CONFIG_SMA 00:09:03.181 #define SPDK_CONFIG_TESTS 1 00:09:03.181 #undef SPDK_CONFIG_TSAN 00:09:03.181 #define SPDK_CONFIG_UBLK 1 00:09:03.181 #define SPDK_CONFIG_UBSAN 1 00:09:03.181 #undef SPDK_CONFIG_UNIT_TESTS 00:09:03.181 #undef SPDK_CONFIG_URING 00:09:03.181 #define SPDK_CONFIG_URING_PATH 00:09:03.181 #undef SPDK_CONFIG_URING_ZNS 00:09:03.181 #undef SPDK_CONFIG_USDT 00:09:03.181 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:03.181 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:03.181 #undef SPDK_CONFIG_VFIO_USER 00:09:03.181 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:03.181 #define SPDK_CONFIG_VHOST 1 00:09:03.181 #define SPDK_CONFIG_VIRTIO 1 00:09:03.181 #undef SPDK_CONFIG_VTUNE 00:09:03.181 #define SPDK_CONFIG_VTUNE_DIR 00:09:03.181 #define SPDK_CONFIG_WERROR 1 00:09:03.181 #define SPDK_CONFIG_WPDK_DIR 00:09:03.181 #undef SPDK_CONFIG_XNVME 00:09:03.181 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.181 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:03.182 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:03.182 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:03.182 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:03.182 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:03.443 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:03.443 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.443 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:03.443 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:03.443 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 770895 ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 770895 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.DHCjcb 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DHCjcb/tests/target /tmp/spdk.DHCjcb 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:09:03.444 11:14:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190561013760 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974303744 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5413289984 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97982439424 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9375744 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986818048 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987153920 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=335872 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:03.444 * Looking for test storage... 
00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190561013760 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7627882496 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.444 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.445 11:14:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.713 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.714 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.714 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.714 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.714 11:14:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.714 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.714 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.714 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.714 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:09:08.972 00:09:08.972 --- 10.0.0.2 ping statistics --- 00:09:08.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.972 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:09:08.972 00:09:08.972 --- 10.0.0.1 ping statistics --- 00:09:08.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.972 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.972 ************************************ 00:09:08.972 START TEST nvmf_filesystem_no_in_capsule 00:09:08.972 ************************************ 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.972 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=773934 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 773934 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 773934 ']' 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.973 11:14:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.973 [2024-07-12 11:14:55.309127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:08.973 [2024-07-12 11:14:55.309210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.230 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.230 [2024-07-12 11:14:55.417107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.489 [2024-07-12 11:14:55.651262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.489 [2024-07-12 11:14:55.651302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.489 [2024-07-12 11:14:55.651314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.489 [2024-07-12 11:14:55.651340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.489 [2024-07-12 11:14:55.651350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.489 [2024-07-12 11:14:55.651430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.489 [2024-07-12 11:14:55.651508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.489 [2024-07-12 11:14:55.651562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.489 [2024-07-12 11:14:55.651573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.747 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.747 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:09.747 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.747 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.747 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.006 [2024-07-12 11:14:56.131437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.006 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 Malloc1 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.574 11:14:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 [2024-07-12 11:14:56.840447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:10.574 { 00:09:10.574 "name": "Malloc1", 00:09:10.574 "aliases": [ 00:09:10.574 "e67d9ed1-01cc-462a-993f-f3b75315d82e" 00:09:10.574 ], 00:09:10.574 "product_name": "Malloc disk", 
00:09:10.574 "block_size": 512, 00:09:10.574 "num_blocks": 1048576, 00:09:10.574 "uuid": "e67d9ed1-01cc-462a-993f-f3b75315d82e", 00:09:10.574 "assigned_rate_limits": { 00:09:10.574 "rw_ios_per_sec": 0, 00:09:10.574 "rw_mbytes_per_sec": 0, 00:09:10.574 "r_mbytes_per_sec": 0, 00:09:10.574 "w_mbytes_per_sec": 0 00:09:10.574 }, 00:09:10.574 "claimed": true, 00:09:10.574 "claim_type": "exclusive_write", 00:09:10.574 "zoned": false, 00:09:10.574 "supported_io_types": { 00:09:10.574 "read": true, 00:09:10.574 "write": true, 00:09:10.574 "unmap": true, 00:09:10.574 "flush": true, 00:09:10.574 "reset": true, 00:09:10.574 "nvme_admin": false, 00:09:10.574 "nvme_io": false, 00:09:10.574 "nvme_io_md": false, 00:09:10.574 "write_zeroes": true, 00:09:10.574 "zcopy": true, 00:09:10.574 "get_zone_info": false, 00:09:10.574 "zone_management": false, 00:09:10.574 "zone_append": false, 00:09:10.574 "compare": false, 00:09:10.574 "compare_and_write": false, 00:09:10.574 "abort": true, 00:09:10.574 "seek_hole": false, 00:09:10.574 "seek_data": false, 00:09:10.574 "copy": true, 00:09:10.574 "nvme_iov_md": false 00:09:10.574 }, 00:09:10.574 "memory_domains": [ 00:09:10.574 { 00:09:10.574 "dma_device_id": "system", 00:09:10.574 "dma_device_type": 1 00:09:10.574 }, 00:09:10.574 { 00:09:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.574 "dma_device_type": 2 00:09:10.574 } 00:09:10.574 ], 00:09:10.574 "driver_specific": {} 00:09:10.574 } 00:09:10.574 ]' 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:10.574 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:10.832 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:10.832 
11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:10.832 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:10.832 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:10.832 11:14:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.769 11:14:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.769 11:14:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.769 11:14:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.769 11:14:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.769 11:14:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:14.302 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:14.561 11:15:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:15.499 11:15:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.499 ************************************ 00:09:15.499 START TEST filesystem_ext4 00:09:15.499 ************************************ 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:15.499 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:15.500 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:15.500 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:09:15.500 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:15.500 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:15.500 11:15:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:15.500 mke2fs 1.46.5 (30-Dec-2021) 00:09:15.759 Discarding device blocks: 0/522240 done 00:09:15.759 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:15.759 Filesystem UUID: f3d890b0-9271-4deb-8d58-a66a6f499ce1 00:09:15.759 Superblock backups stored on blocks: 00:09:15.759 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:15.759 00:09:15.759 Allocating group tables: 0/64 done 00:09:15.759 Writing inode tables: 0/64 done 00:09:18.551 Creating journal (8192 blocks): done 00:09:18.551 Writing superblocks and filesystem accounting information: 0/64 done 00:09:18.551 00:09:18.551 11:15:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:18.551 11:15:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 773934 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.490 00:09:19.490 real 0m3.767s 00:09:19.490 user 0m0.035s 00:09:19.490 sys 0m0.055s 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:19.490 ************************************ 00:09:19.490 END TEST filesystem_ext4 00:09:19.490 ************************************ 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.490 11:15:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.490 ************************************ 00:09:19.490 START TEST filesystem_btrfs 00:09:19.490 ************************************ 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:19.490 btrfs-progs v6.6.2 00:09:19.490 See https://btrfs.readthedocs.io for more 
information. 00:09:19.490 00:09:19.490 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:19.490 NOTE: several default settings have changed in version 5.15, please make sure 00:09:19.490 this does not affect your deployments: 00:09:19.490 - DUP for metadata (-m dup) 00:09:19.490 - enabled no-holes (-O no-holes) 00:09:19.490 - enabled free-space-tree (-R free-space-tree) 00:09:19.490 00:09:19.490 Label: (null) 00:09:19.490 UUID: 706eefcf-74fd-46b4-9a2b-a6349b7b5d25 00:09:19.490 Node size: 16384 00:09:19.490 Sector size: 4096 00:09:19.490 Filesystem size: 510.00MiB 00:09:19.490 Block group profiles: 00:09:19.490 Data: single 8.00MiB 00:09:19.490 Metadata: DUP 32.00MiB 00:09:19.490 System: DUP 8.00MiB 00:09:19.490 SSD detected: yes 00:09:19.490 Zoned device: no 00:09:19.490 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:19.490 Runtime features: free-space-tree 00:09:19.490 Checksum: crc32c 00:09:19.490 Number of devices: 1 00:09:19.490 Devices: 00:09:19.490 ID SIZE PATH 00:09:19.490 1 510.00MiB /dev/nvme0n1p1 00:09:19.490 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:19.490 11:15:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:20.425 11:15:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 773934 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.425 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:20.426 00:09:20.426 real 0m0.888s 00:09:20.426 user 0m0.020s 00:09:20.426 sys 0m0.122s 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:20.426 ************************************ 00:09:20.426 END TEST filesystem_btrfs 00:09:20.426 ************************************ 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.426 ************************************ 00:09:20.426 START TEST filesystem_xfs 00:09:20.426 ************************************ 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:20.426 11:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:20.426 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:09:20.426 = sectsz=512 attr=2, projid32bit=1 00:09:20.426 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:20.426 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:20.426 data = bsize=4096 blocks=130560, imaxpct=25 00:09:20.426 = sunit=0 swidth=0 blks 00:09:20.426 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:20.426 log =internal log bsize=4096 blocks=16384, version=2 00:09:20.426 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:20.426 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:21.363 Discarding blocks...Done. 00:09:21.363 11:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:21.363 11:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 773934 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:23.263 11:15:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:23.263 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:23.522 00:09:23.522 real 0m2.987s 00:09:23.522 user 0m0.019s 00:09:23.522 sys 0m0.074s 00:09:23.522 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.522 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:23.522 ************************************ 00:09:23.522 END TEST filesystem_xfs 00:09:23.522 ************************************ 00:09:23.522 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:23.522 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:23.782 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:23.782 11:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:23.782 11:15:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 773934 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 773934 ']' 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 773934 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 773934 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773934' 00:09:23.782 killing process with pid 773934 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 773934 00:09:23.782 11:15:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 773934 00:09:27.070 11:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:27.070 00:09:27.070 real 0m17.773s 00:09:27.070 user 1m7.629s 00:09:27.070 sys 0m1.386s 00:09:27.070 11:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.070 11:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.070 ************************************ 00:09:27.070 END TEST nvmf_filesystem_no_in_capsule 00:09:27.070 ************************************ 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.070 ************************************ 00:09:27.070 START TEST 
nvmf_filesystem_in_capsule 00:09:27.070 ************************************ 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=777645 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 777645 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 777645 ']' 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.070 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.070 [2024-07-12 11:15:13.152495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:27.070 [2024-07-12 11:15:13.152580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.070 [2024-07-12 11:15:13.261029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.328 [2024-07-12 11:15:13.482964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.328 [2024-07-12 11:15:13.483011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.328 [2024-07-12 11:15:13.483023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.328 [2024-07-12 11:15:13.483047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.328 [2024-07-12 11:15:13.483057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.328 [2024-07-12 11:15:13.483128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.328 [2024-07-12 11:15:13.483201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.328 [2024-07-12 11:15:13.483266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.328 [2024-07-12 11:15:13.483276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.587 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.587 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:27.587 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.587 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.587 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.845 [2024-07-12 11:15:13.968251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.845 11:15:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.413 Malloc1 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.413 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.414 11:15:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.414 [2024-07-12 11:15:14.671346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:28.414 { 00:09:28.414 "name": "Malloc1", 00:09:28.414 "aliases": [ 00:09:28.414 "9234f38f-7d9c-4dd9-b4a1-1c20d006c328" 00:09:28.414 ], 00:09:28.414 "product_name": "Malloc disk", 00:09:28.414 "block_size": 512, 00:09:28.414 "num_blocks": 1048576, 00:09:28.414 "uuid": "9234f38f-7d9c-4dd9-b4a1-1c20d006c328", 00:09:28.414 "assigned_rate_limits": { 
00:09:28.414 "rw_ios_per_sec": 0, 00:09:28.414 "rw_mbytes_per_sec": 0, 00:09:28.414 "r_mbytes_per_sec": 0, 00:09:28.414 "w_mbytes_per_sec": 0 00:09:28.414 }, 00:09:28.414 "claimed": true, 00:09:28.414 "claim_type": "exclusive_write", 00:09:28.414 "zoned": false, 00:09:28.414 "supported_io_types": { 00:09:28.414 "read": true, 00:09:28.414 "write": true, 00:09:28.414 "unmap": true, 00:09:28.414 "flush": true, 00:09:28.414 "reset": true, 00:09:28.414 "nvme_admin": false, 00:09:28.414 "nvme_io": false, 00:09:28.414 "nvme_io_md": false, 00:09:28.414 "write_zeroes": true, 00:09:28.414 "zcopy": true, 00:09:28.414 "get_zone_info": false, 00:09:28.414 "zone_management": false, 00:09:28.414 "zone_append": false, 00:09:28.414 "compare": false, 00:09:28.414 "compare_and_write": false, 00:09:28.414 "abort": true, 00:09:28.414 "seek_hole": false, 00:09:28.414 "seek_data": false, 00:09:28.414 "copy": true, 00:09:28.414 "nvme_iov_md": false 00:09:28.414 }, 00:09:28.414 "memory_domains": [ 00:09:28.414 { 00:09:28.414 "dma_device_id": "system", 00:09:28.414 "dma_device_type": 1 00:09:28.414 }, 00:09:28.414 { 00:09:28.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.414 "dma_device_type": 2 00:09:28.414 } 00:09:28.414 ], 00:09:28.414 "driver_specific": {} 00:09:28.414 } 00:09:28.414 ]' 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:28.414 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:28.673 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:28.673 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:28.673 11:15:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:28.674 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:28.674 11:15:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.048 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.048 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.048 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.048 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.048 11:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:31.955 11:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:31.955 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:32.892 11:15:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:33.829 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:33.829 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:09:33.829 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:33.829 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.829 11:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.829 ************************************ 00:09:33.829 START TEST filesystem_in_capsule_ext4 00:09:33.829 ************************************ 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:33.829 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:33.830 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:33.830 11:15:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:33.830 11:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:33.830 mke2fs 1.46.5 (30-Dec-2021) 00:09:33.830 Discarding device blocks: 0/522240 done 00:09:33.830 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:33.830 Filesystem UUID: 12b0bfc2-775d-4e73-9be4-e16653703c5b 00:09:33.830 Superblock backups stored on blocks: 00:09:33.830 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:33.830 00:09:33.830 Allocating group tables: 0/64 done 00:09:33.830 Writing inode tables: 0/64 done 00:09:35.206 Creating journal (8192 blocks): done 00:09:35.206 Writing superblocks and filesystem accounting information: 0/64 done 00:09:35.206 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:35.206 11:15:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 777645 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:35.206 00:09:35.206 real 0m1.471s 00:09:35.206 user 0m0.015s 00:09:35.206 sys 0m0.076s 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:35.206 ************************************ 00:09:35.206 END TEST filesystem_in_capsule_ext4 00:09:35.206 ************************************ 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.206 
11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.206 ************************************ 00:09:35.206 START TEST filesystem_in_capsule_btrfs 00:09:35.206 ************************************ 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:35.206 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:09:35.775 btrfs-progs v6.6.2 00:09:35.775 See https://btrfs.readthedocs.io for more information. 00:09:35.775 00:09:35.775 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:35.775 NOTE: several default settings have changed in version 5.15, please make sure 00:09:35.775 this does not affect your deployments: 00:09:35.775 - DUP for metadata (-m dup) 00:09:35.775 - enabled no-holes (-O no-holes) 00:09:35.775 - enabled free-space-tree (-R free-space-tree) 00:09:35.775 00:09:35.775 Label: (null) 00:09:35.775 UUID: a352d342-de1f-4acb-8bb4-289d087010c0 00:09:35.775 Node size: 16384 00:09:35.775 Sector size: 4096 00:09:35.775 Filesystem size: 510.00MiB 00:09:35.775 Block group profiles: 00:09:35.775 Data: single 8.00MiB 00:09:35.775 Metadata: DUP 32.00MiB 00:09:35.775 System: DUP 8.00MiB 00:09:35.775 SSD detected: yes 00:09:35.775 Zoned device: no 00:09:35.775 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:35.775 Runtime features: free-space-tree 00:09:35.775 Checksum: crc32c 00:09:35.775 Number of devices: 1 00:09:35.775 Devices: 00:09:35.775 ID SIZE PATH 00:09:35.775 1 510.00MiB /dev/nvme0n1p1 00:09:35.775 00:09:35.775 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:35.775 11:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:36.710 11:15:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 777645 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:36.710 00:09:36.710 real 0m1.254s 00:09:36.710 user 0m0.022s 00:09:36.710 sys 0m0.127s 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:36.710 ************************************ 00:09:36.710 END TEST filesystem_in_capsule_btrfs 00:09:36.710 ************************************ 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:36.710 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.711 ************************************ 00:09:36.711 START TEST filesystem_in_capsule_xfs 00:09:36.711 ************************************ 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:36.711 11:15:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:36.711 11:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:36.711 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:36.711 = sectsz=512 attr=2, projid32bit=1 00:09:36.711 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:36.711 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:36.711 data = bsize=4096 blocks=130560, imaxpct=25 00:09:36.711 = sunit=0 swidth=0 blks 00:09:36.711 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:36.711 log =internal log bsize=4096 blocks=16384, version=2 00:09:36.711 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:36.711 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:37.646 Discarding blocks...Done. 00:09:37.646 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:37.646 11:15:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:40.179 11:15:26 
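The btrfs and xfs runs above both follow the same shape from `target/filesystem.sh`: force-format the partition, mount it, create and remove a file with syncs in between, then unmount. A minimal sketch of that flow, assuming the `-F`/`-f` force-flag split implied by the `'[' xfs = ext4 ']'` branch visible in the trace (device and mountpoint names are illustrative):

```shell
# Sketch of the make_filesystem helper pattern from autotest_common.sh:
# ext4 takes -F to force, btrfs/xfs take -f (inferred from the
# "[ xfs = ext4 ]" check in the trace; not SPDK's exact code).
make_fs_cmd() {
    fstype=$1 dev=$2
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    echo "mkfs.$fstype $force $dev"
}

# Smoke test mirroring filesystem.sh@23-30: mount, create a file,
# sync, remove it, sync again, unmount. Needs root and a real device.
fs_smoke_test() {
    dev=$1 mnt=$2
    mount "$dev" "$mnt" &&
    touch "$mnt/aaa" && sync &&
    rm "$mnt/aaa" && sync &&
    umount "$mnt"
}
```

For example, `make_fs_cmd xfs /dev/nvme0n1p1` prints `mkfs.xfs -f /dev/nvme0n1p1`, matching the command line in the trace above.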
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 777645 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.179 00:09:40.179 real 0m3.629s 00:09:40.179 user 0m0.024s 00:09:40.179 sys 0m0.069s 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:40.179 ************************************ 00:09:40.179 END TEST filesystem_in_capsule_xfs 00:09:40.179 ************************************ 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:40.179 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:40.747 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:40.747 11:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.007 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 777645 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 777645 ']' 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 777645 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777645 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777645' 00:09:41.007 killing process with pid 777645 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 777645 00:09:41.007 11:15:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 777645 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:44.293 00:09:44.293 real 0m16.946s 00:09:44.293 user 1m4.337s 00:09:44.293 sys 0m1.391s 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.293 ************************************ 00:09:44.293 END TEST nvmf_filesystem_in_capsule 00:09:44.293 ************************************ 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:44.293 11:15:30 
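The `killprocess 777645` sequence traced above probes the pid with `kill -0`, checks `uname`, looks up the process name with `ps --no-headers -o comm=`, refuses to signal a bare `sudo` wrapper, then kills and waits. A standalone sketch of that pattern (the function name and return codes here are illustrative, not SPDK's exact implementation):

```shell
# Sketch of the killprocess pattern from autotest_common.sh@948-972:
# verify the pid is alive, refuse to signal a sudo wrapper,
# then kill it and reap it.
killprocess_sketch() {
    pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already exited
    if [ "$(uname)" = Linux ]; then
        # never signal the sudo wrapper itself
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}
```

Calling it twice is safe: the second call finds `kill -0` failing and returns immediately, which is why the trace's `wait 777645` after `kill 777645` does not error out.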
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.293 rmmod nvme_tcp 00:09:44.293 rmmod nvme_fabrics 00:09:44.293 rmmod nvme_keyring 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.293 11:15:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.200 11:15:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.200 00:09:46.200 real 0m42.826s 00:09:46.200 user 2m13.749s 00:09:46.200 sys 0m7.100s 00:09:46.200 11:15:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.200 11:15:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.200 ************************************ 00:09:46.200 END TEST nvmf_filesystem 00:09:46.200 ************************************ 00:09:46.200 11:15:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:46.200 11:15:32 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:46.200 11:15:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.200 11:15:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.200 11:15:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.200 ************************************ 00:09:46.200 START TEST nvmf_target_discovery 00:09:46.200 ************************************ 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:46.200 * Looking for test storage... 
00:09:46.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:46.200 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.201 11:15:32 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.201 11:15:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.486 
11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.486 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.486 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.487 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.487 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.487 11:15:37 
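The `nvmf_tcp_init` block above builds the two-sided test network: the target NIC (`cvl_0_0`) moves into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator side (`cvl_0_1`) keeps 10.0.0.1/24, and port 4420 is opened. A sketch that just emits the equivalent `ip`/`iptables` commands, with names taken from the trace (actually running them needs root and the real `cvl_*` devices):

```shell
# Emit the namespace-setup commands seen at nvmf/common.sh@244-264.
# tgt/ini are the PCI net devices discovered earlier (cvl_0_0 / cvl_0_1).
nvmf_tcp_init_cmds() {
    tgt=$1 ini=$2 ns=$3
    cat <<EOF
ip -4 addr flush $tgt
ip -4 addr flush $ini
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT
EOF
}
```

Piping the output through `sudo sh` would replay the setup; the bidirectional `ping -c 1` checks that follow in the log then confirm the two namespaces can reach each other before `nvmf_tgt` is started inside the namespace.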
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:09:51.487 00:09:51.487 --- 10.0.0.2 ping statistics --- 00:09:51.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.487 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:51.487 00:09:51.487 --- 10.0.0.1 ping statistics --- 00:09:51.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.487 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=784125 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 784125 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 784125 ']' 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.487 11:15:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.487 [2024-07-12 11:15:37.729139] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:51.487 [2024-07-12 11:15:37.729244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.487 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.813 [2024-07-12 11:15:37.839231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.813 [2024-07-12 11:15:38.056955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.813 [2024-07-12 11:15:38.056999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.813 [2024-07-12 11:15:38.057010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.813 [2024-07-12 11:15:38.057018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.813 [2024-07-12 11:15:38.057027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.813 [2024-07-12 11:15:38.057139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.813 [2024-07-12 11:15:38.057248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.813 [2024-07-12 11:15:38.057311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.813 [2024-07-12 11:15:38.057321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.411 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.411 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:52.411 11:15:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.411 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:52.411 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 [2024-07-12 11:15:38.570674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:52.412 11:15:38 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 Null1 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 [2024-07-12 11:15:38.618945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:52.412 11:15:38 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 Null2 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 Null3 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 Null4 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.412 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:52.706 00:09:52.706 Discovery Log Number of Records 6, Generation counter 6 00:09:52.706 =====Discovery Log Entry 0====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: current discovery subsystem 00:09:52.706 treq: not required 00:09:52.706 portid: 0 00:09:52.706 trsvcid: 4420 00:09:52.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: explicit discovery connections, duplicate discovery information 00:09:52.706 sectype: none 00:09:52.706 =====Discovery Log Entry 1====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: nvme subsystem 00:09:52.706 treq: not required 00:09:52.706 portid: 0 00:09:52.706 trsvcid: 4420 00:09:52.706 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: none 00:09:52.706 sectype: none 00:09:52.706 =====Discovery Log Entry 2====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: nvme subsystem 00:09:52.706 treq: not required 00:09:52.706 portid: 
0 00:09:52.706 trsvcid: 4420 00:09:52.706 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: none 00:09:52.706 sectype: none 00:09:52.706 =====Discovery Log Entry 3====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: nvme subsystem 00:09:52.706 treq: not required 00:09:52.706 portid: 0 00:09:52.706 trsvcid: 4420 00:09:52.706 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: none 00:09:52.706 sectype: none 00:09:52.706 =====Discovery Log Entry 4====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: nvme subsystem 00:09:52.706 treq: not required 00:09:52.706 portid: 0 00:09:52.706 trsvcid: 4420 00:09:52.706 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: none 00:09:52.706 sectype: none 00:09:52.706 =====Discovery Log Entry 5====== 00:09:52.706 trtype: tcp 00:09:52.706 adrfam: ipv4 00:09:52.706 subtype: discovery subsystem referral 00:09:52.706 treq: not required 00:09:52.706 portid: 0 00:09:52.706 trsvcid: 4430 00:09:52.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:52.706 traddr: 10.0.0.2 00:09:52.706 eflags: none 00:09:52.706 sectype: none 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:52.706 Perform nvmf subsystem discovery via RPC 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 [ 00:09:52.706 { 00:09:52.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:52.706 "subtype": "Discovery", 00:09:52.706 "listen_addresses": [ 00:09:52.706 { 00:09:52.706 "trtype": "TCP", 00:09:52.706 "adrfam": "IPv4", 00:09:52.706 "traddr": "10.0.0.2", 
00:09:52.706 "trsvcid": "4420" 00:09:52.706 } 00:09:52.706 ], 00:09:52.706 "allow_any_host": true, 00:09:52.706 "hosts": [] 00:09:52.706 }, 00:09:52.706 { 00:09:52.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.706 "subtype": "NVMe", 00:09:52.706 "listen_addresses": [ 00:09:52.706 { 00:09:52.706 "trtype": "TCP", 00:09:52.706 "adrfam": "IPv4", 00:09:52.706 "traddr": "10.0.0.2", 00:09:52.706 "trsvcid": "4420" 00:09:52.706 } 00:09:52.706 ], 00:09:52.706 "allow_any_host": true, 00:09:52.706 "hosts": [], 00:09:52.706 "serial_number": "SPDK00000000000001", 00:09:52.706 "model_number": "SPDK bdev Controller", 00:09:52.706 "max_namespaces": 32, 00:09:52.706 "min_cntlid": 1, 00:09:52.706 "max_cntlid": 65519, 00:09:52.706 "namespaces": [ 00:09:52.706 { 00:09:52.706 "nsid": 1, 00:09:52.706 "bdev_name": "Null1", 00:09:52.706 "name": "Null1", 00:09:52.706 "nguid": "6D11AA932A7D40218DC7F4276A83095E", 00:09:52.706 "uuid": "6d11aa93-2a7d-4021-8dc7-f4276a83095e" 00:09:52.706 } 00:09:52.706 ] 00:09:52.706 }, 00:09:52.706 { 00:09:52.706 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.706 "subtype": "NVMe", 00:09:52.706 "listen_addresses": [ 00:09:52.706 { 00:09:52.706 "trtype": "TCP", 00:09:52.706 "adrfam": "IPv4", 00:09:52.706 "traddr": "10.0.0.2", 00:09:52.706 "trsvcid": "4420" 00:09:52.706 } 00:09:52.706 ], 00:09:52.706 "allow_any_host": true, 00:09:52.706 "hosts": [], 00:09:52.706 "serial_number": "SPDK00000000000002", 00:09:52.706 "model_number": "SPDK bdev Controller", 00:09:52.706 "max_namespaces": 32, 00:09:52.706 "min_cntlid": 1, 00:09:52.706 "max_cntlid": 65519, 00:09:52.706 "namespaces": [ 00:09:52.706 { 00:09:52.706 "nsid": 1, 00:09:52.706 "bdev_name": "Null2", 00:09:52.706 "name": "Null2", 00:09:52.706 "nguid": "67009D3068D046A3AA5CA394B4A8AA9F", 00:09:52.706 "uuid": "67009d30-68d0-46a3-aa5c-a394b4a8aa9f" 00:09:52.706 } 00:09:52.706 ] 00:09:52.706 }, 00:09:52.706 { 00:09:52.706 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:52.706 "subtype": "NVMe", 00:09:52.706 
"listen_addresses": [ 00:09:52.706 { 00:09:52.706 "trtype": "TCP", 00:09:52.706 "adrfam": "IPv4", 00:09:52.706 "traddr": "10.0.0.2", 00:09:52.706 "trsvcid": "4420" 00:09:52.706 } 00:09:52.706 ], 00:09:52.706 "allow_any_host": true, 00:09:52.706 "hosts": [], 00:09:52.706 "serial_number": "SPDK00000000000003", 00:09:52.706 "model_number": "SPDK bdev Controller", 00:09:52.706 "max_namespaces": 32, 00:09:52.706 "min_cntlid": 1, 00:09:52.706 "max_cntlid": 65519, 00:09:52.706 "namespaces": [ 00:09:52.706 { 00:09:52.706 "nsid": 1, 00:09:52.706 "bdev_name": "Null3", 00:09:52.706 "name": "Null3", 00:09:52.706 "nguid": "AA41F7D360994719A2B18B05BFEB2804", 00:09:52.706 "uuid": "aa41f7d3-6099-4719-a2b1-8b05bfeb2804" 00:09:52.706 } 00:09:52.706 ] 00:09:52.706 }, 00:09:52.706 { 00:09:52.706 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:52.706 "subtype": "NVMe", 00:09:52.706 "listen_addresses": [ 00:09:52.706 { 00:09:52.706 "trtype": "TCP", 00:09:52.706 "adrfam": "IPv4", 00:09:52.706 "traddr": "10.0.0.2", 00:09:52.706 "trsvcid": "4420" 00:09:52.706 } 00:09:52.706 ], 00:09:52.706 "allow_any_host": true, 00:09:52.706 "hosts": [], 00:09:52.706 "serial_number": "SPDK00000000000004", 00:09:52.706 "model_number": "SPDK bdev Controller", 00:09:52.706 "max_namespaces": 32, 00:09:52.706 "min_cntlid": 1, 00:09:52.706 "max_cntlid": 65519, 00:09:52.706 "namespaces": [ 00:09:52.706 { 00:09:52.706 "nsid": 1, 00:09:52.706 "bdev_name": "Null4", 00:09:52.706 "name": "Null4", 00:09:52.706 "nguid": "BDC0766D6104481F953E2620C7C35C5A", 00:09:52.706 "uuid": "bdc0766d-6104-481f-953e-2620c7c35c5a" 00:09:52.706 } 00:09:52.706 ] 00:09:52.706 } 00:09:52.706 ] 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:52.706 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:52.707 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:53.021 rmmod nvme_tcp 00:09:53.021 rmmod nvme_fabrics 00:09:53.021 rmmod nvme_keyring 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:53.021 
11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:53.021 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 784125 ']' 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 784125 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 784125 ']' 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 784125 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784125 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784125' 00:09:53.022 killing process with pid 784125 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 784125 00:09:53.022 11:15:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 784125 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.400 11:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.307 11:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.307 00:09:56.307 real 0m10.320s 00:09:56.307 user 0m9.820s 00:09:56.307 sys 0m4.512s 00:09:56.307 11:15:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.307 11:15:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:56.307 ************************************ 00:09:56.307 END TEST nvmf_target_discovery 00:09:56.307 ************************************ 00:09:56.307 11:15:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.307 11:15:42 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:56.307 11:15:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.307 11:15:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.307 11:15:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.307 ************************************ 00:09:56.307 START TEST nvmf_referrals 00:09:56.307 ************************************ 00:09:56.307 11:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:56.567 * Looking for test storage... 
00:09:56.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.567 
11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:56.567 11:15:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.568 11:15:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:01.840 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:01.840 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:01.840 Found net devices under 0000:86:00.0: cvl_0_0 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.840 11:15:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:01.840 Found net devices under 0000:86:00.1: cvl_0_1 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.840 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:01.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:10:01.841 00:10:01.841 --- 10.0.0.2 ping statistics --- 00:10:01.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.841 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:01.841 00:10:01.841 --- 10.0.0.1 ping statistics --- 00:10:01.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.841 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:01.841 11:15:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=787927 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 787927 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 787927 ']' 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:01.841 11:15:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.841 [2024-07-12 11:15:47.809015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:01.841 [2024-07-12 11:15:47.809119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.841 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.841 [2024-07-12 11:15:47.918984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.841 [2024-07-12 11:15:48.145573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.841 [2024-07-12 11:15:48.145612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:01.841 [2024-07-12 11:15:48.145624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.841 [2024-07-12 11:15:48.145633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.841 [2024-07-12 11:15:48.145642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.841 [2024-07-12 11:15:48.145715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.841 [2024-07-12 11:15:48.145793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.841 [2024-07-12 11:15:48.145851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.841 [2024-07-12 11:15:48.145862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 [2024-07-12 11:15:48.631259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 [2024-07-12 11:15:48.647452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:02.409 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.410 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.668 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.669 11:15:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:02.669 11:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.928 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:02.929 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:03.187 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:03.445 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:10:03.704 11:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:03.704 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.964 rmmod nvme_tcp 00:10:03.964 rmmod nvme_fabrics 00:10:03.964 rmmod nvme_keyring 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 787927 ']' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 787927 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 787927 ']' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 787927 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:03.964 11:15:50 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787927 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787927' 00:10:03.964 killing process with pid 787927 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 787927 00:10:03.964 11:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 787927 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.343 11:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.878 11:15:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.878 00:10:07.878 real 0m11.085s 00:10:07.878 user 0m14.787s 00:10:07.878 sys 0m4.514s 00:10:07.878 11:15:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.878 11:15:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.878 
************************************ 00:10:07.878 END TEST nvmf_referrals 00:10:07.878 ************************************ 00:10:07.878 11:15:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.878 11:15:53 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:07.878 11:15:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.878 11:15:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.878 11:15:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.878 ************************************ 00:10:07.878 START TEST nvmf_connect_disconnect 00:10:07.878 ************************************ 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:07.878 * Looking for test storage... 
00:10:07.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.878 11:15:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.878 11:15:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.878 11:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.152 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.152 
11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.152 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.152 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.153 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.153 11:15:58 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.153 11:15:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.153 
11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:10:13.153 00:10:13.153 --- 10.0.0.2 ping statistics --- 00:10:13.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.153 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:13.153 00:10:13.153 --- 10.0.0.1 ping statistics --- 00:10:13.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.153 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.153 11:15:59 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=792007 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 792007 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 792007 ']' 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.153 11:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.153 [2024-07-12 11:15:59.244453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:13.153 [2024-07-12 11:15:59.244542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.153 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.153 [2024-07-12 11:15:59.355718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.412 [2024-07-12 11:15:59.581312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.412 [2024-07-12 11:15:59.581357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.412 [2024-07-12 11:15:59.581368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.412 [2024-07-12 11:15:59.581398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.412 [2024-07-12 11:15:59.581409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
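The startup wait traced above (`waitforlisten 792007` with `max_retries=100`, blocking on the message "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") can be sketched as a small polling helper. The socket path, retry count, and pid liveness check come from the log; the polling body is an illustrative reconstruction under those assumptions, not the actual SPDK autotest_common.sh implementation.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern visible in the trace: poll until the
# target process is alive AND its RPC UNIX socket exists, up to max_retries.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # default socket path from the log
    local max_retries=100                     # matches max_retries=100 above
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died while starting
        [ -S "$rpc_addr" ] && return 0          # UNIX socket is up; ready
        sleep 0.1
    done
    return 1  # timed out waiting for the listener
}
```

In the real helper the readiness check is more involved (it probes the RPC server, not just the socket file), but the retry/liveness loop shape is the same.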
00:10:13.412 [2024-07-12 11:15:59.581474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.412 [2024-07-12 11:15:59.581505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.412 [2024-07-12 11:15:59.581611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.412 [2024-07-12 11:15:59.581622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.671 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.671 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:13.671 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.671 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.671 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.930 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.931 [2024-07-12 11:16:00.068956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:13.931 [2024-07-12 11:16:00.192800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:13.931 11:16:00 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:13.931 11:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:16.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:09.767 [... the identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' line repeats once per iteration from 00:11:12 through 00:13:13 ...] 00:13:13.732
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
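The iteration loop recorded above (`num_iterations=100`, `NVME_CONNECT='nvme connect -i 8'`, one "disconnected 1 controller(s)" line per pass) can be sketched roughly as follows. The NQN, address, port, and queue count are taken from the log; the `run` wrapper is a hypothetical dry-run shim so the sketch does not require a live NVMe-oF target.

```shell
#!/usr/bin/env bash
# Sketch of the connect_disconnect.sh loop: repeatedly attach and detach
# the initiator from the subsystem created earlier in the log.
set -euo pipefail

nqn="nqn.2016-06.io.spdk:cnode1"
addr="10.0.0.2"
port="4420"
num_iterations=100

run() { echo "$@"; }   # dry-run shim; replace the body with "$@" to execute

for ((i = 1; i <= num_iterations; i++)); do
    # -i 8: request 8 I/O queues per controller, as NVME_CONNECT does above
    run nvme connect -i 8 -t tcp -n "$nqn" -a "$addr" -s "$port"
    run nvme disconnect -n "$nqn"
done
```

Each `nvme disconnect` is what produces the repeated "NQN:... disconnected 1 controller(s)" lines in the transcript.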
00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.713 rmmod nvme_tcp 00:14:05.713 rmmod nvme_fabrics 00:14:05.713 rmmod nvme_keyring 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.713 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 792007 ']' 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 792007 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 792007 ']' 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 792007 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792007 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792007' 
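The teardown steps traced here (pid non-empty check, `kill -0`, `uname`, `ps --no-headers -o comm=`, refuse-if-sudo, then kill) follow the killprocess helper. A rough sketch of that flow, paraphrased from the trace rather than copied from the SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow from autotest_common.sh as seen in the log:
# validate the pid, confirm the process exists, check its command name, and
# only then send the signal.
set -u

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # the '[' -z ... ']' guard above
    kill -0 "$pid" 2>/dev/null || return 0  # process already gone; nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = "sudo" ]; then
        return 1                            # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
}
```

The `reactor_0` seen in the log is the command name of the SPDK target's main thread, which is why the sudo check passes and the kill proceeds.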
00:14:05.714 killing process with pid 792007 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 792007 00:14:05.714 11:19:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 792007 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.091 11:19:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.627 11:19:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.627 00:14:09.627 real 4m1.557s 00:14:09.627 user 15m25.115s 00:14:09.627 sys 0m20.245s 00:14:09.627 11:19:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:09.627 11:19:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:09.627 ************************************ 00:14:09.627 END TEST nvmf_connect_disconnect 00:14:09.627 ************************************ 00:14:09.627 11:19:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:09.627 11:19:55 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:09.627 11:19:55 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:09.627 11:19:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.627 11:19:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.627 ************************************ 00:14:09.627 START TEST nvmf_multitarget 00:14:09.627 ************************************ 00:14:09.627 11:19:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:09.628 * Looking for test storage... 00:14:09.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.628 11:19:55 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.628 11:19:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.901 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:14.902 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:14.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.902 11:20:00 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:14.902 Found net devices under 0000:86:00.0: cvl_0_0 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:14.902 Found net devices under 0000:86:00.1: cvl_0_1 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.902 11:20:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.902 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:14:14.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:14:14.902 00:14:14.902 --- 10.0.0.2 ping statistics --- 00:14:14.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.902 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:14:14.902 00:14:14.902 --- 10.0.0.1 ping statistics --- 00:14:14.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.902 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=835764 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 835764 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 835764 ']' 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.902 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:14.902 [2024-07-12 11:20:01.177388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:14.902 [2024-07-12 11:20:01.177468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.902 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.162 [2024-07-12 11:20:01.291677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.421 [2024-07-12 11:20:01.520453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:15.421 [2024-07-12 11:20:01.520496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.421 [2024-07-12 11:20:01.520508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.421 [2024-07-12 11:20:01.520517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.421 [2024-07-12 11:20:01.520526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.421 [2024-07-12 11:20:01.520602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.421 [2024-07-12 11:20:01.520616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.421 [2024-07-12 11:20:01.520694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.421 [2024-07-12 11:20:01.520705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:15.680 11:20:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:15.680 11:20:01 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:15.938 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:15.938 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:15.938 "nvmf_tgt_1" 00:14:15.938 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:15.938 "nvmf_tgt_2" 00:14:15.938 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:16.196 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:16.196 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:16.196 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:16.196 true 00:14:16.196 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:16.455 true 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.455 rmmod nvme_tcp 00:14:16.455 rmmod nvme_fabrics 00:14:16.455 rmmod nvme_keyring 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 835764 ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 835764 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 835764 ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 835764 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835764 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835764' 00:14:16.455 killing process with 
pid 835764 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 835764 00:14:16.455 11:20:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 835764 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.833 11:20:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.367 11:20:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.367 00:14:20.367 real 0m10.750s 00:14:20.367 user 0m11.693s 00:14:20.367 sys 0m4.709s 00:14:20.367 11:20:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.367 11:20:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:20.367 ************************************ 00:14:20.367 END TEST nvmf_multitarget 00:14:20.367 ************************************ 00:14:20.367 11:20:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.367 11:20:06 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:20.367 11:20:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.367 11:20:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.367 11:20:06 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.367 ************************************ 00:14:20.367 START TEST nvmf_rpc 00:14:20.367 ************************************ 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:20.367 * Looking for test storage... 00:14:20.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.367 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.368 11:20:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.642 11:20:11 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:25.642 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:25.642 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:25.642 Found net devices under 0000:86:00.0: cvl_0_0 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:25.642 Found net devices under 0000:86:00.1: cvl_0_1 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.642 11:20:11 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:25.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:25.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:14:25.642 
00:14:25.642 --- 10.0.0.2 ping statistics ---
00:14:25.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.642 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:25.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:25.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:14:25.642 
00:14:25.642 --- 10.0.0.1 ping statistics ---
00:14:25.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:25.642 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:25.642 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=839763
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 839763
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 839763 ']'
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:25.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:25.643 11:20:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:25.902 [2024-07-12 11:20:12.005255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:14:25.902 [2024-07-12 11:20:12.005343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:25.902 EAL: No free 2048 kB hugepages reported on node 1
00:14:25.902 [2024-07-12 11:20:12.113150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:26.160 [2024-07-12 11:20:12.328031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:26.160 [2024-07-12 11:20:12.328079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:26.160 [2024-07-12 11:20:12.328091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:26.160 [2024-07-12 11:20:12.328100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:26.160 [2024-07-12 11:20:12.328109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:26.160 [2024-07-12 11:20:12.328237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:26.160 [2024-07-12 11:20:12.328313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:14:26.160 [2024-07-12 11:20:12.328383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:26.160 [2024-07-12 11:20:12.328399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.728 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:14:26.728 "tick_rate": 2300000000,
00:14:26.728 "poll_groups": [
00:14:26.728 {
00:14:26.728 "name": "nvmf_tgt_poll_group_000",
00:14:26.728 "admin_qpairs": 0,
00:14:26.728 "io_qpairs": 0,
00:14:26.728 "current_admin_qpairs": 0,
00:14:26.728 "current_io_qpairs": 0,
00:14:26.728 "pending_bdev_io": 0,
00:14:26.728 "completed_nvme_io": 0,
00:14:26.728 "transports": []
00:14:26.728 },
00:14:26.728 {
00:14:26.728 "name": "nvmf_tgt_poll_group_001",
00:14:26.728 "admin_qpairs": 0,
00:14:26.728 "io_qpairs": 0,
00:14:26.728 "current_admin_qpairs": 0,
00:14:26.728 "current_io_qpairs": 0,
00:14:26.728 "pending_bdev_io": 0,
00:14:26.728 "completed_nvme_io": 0,
00:14:26.728 "transports": []
00:14:26.728 },
00:14:26.728 {
00:14:26.728 "name": "nvmf_tgt_poll_group_002",
00:14:26.728 "admin_qpairs": 0,
00:14:26.728 "io_qpairs": 0,
00:14:26.728 "current_admin_qpairs": 0,
00:14:26.728 "current_io_qpairs": 0,
00:14:26.728 "pending_bdev_io": 0,
00:14:26.728 "completed_nvme_io": 0,
00:14:26.728 "transports": []
00:14:26.728 },
00:14:26.728 {
00:14:26.728 "name": "nvmf_tgt_poll_group_003",
00:14:26.728 "admin_qpairs": 0,
00:14:26.728 "io_qpairs": 0,
00:14:26.729 "current_admin_qpairs": 0,
00:14:26.729 "current_io_qpairs": 0,
00:14:26.729 "pending_bdev_io": 0,
00:14:26.729 "completed_nvme_io": 0,
00:14:26.729 "transports": []
00:14:26.729 }
00:14:26.729 ]
00:14:26.729 }'
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.729 [2024-07-12 11:20:12.932470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.729 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:14:26.729 "tick_rate": 2300000000,
00:14:26.729 "poll_groups": [
00:14:26.729 {
00:14:26.729 "name": "nvmf_tgt_poll_group_000",
00:14:26.729 "admin_qpairs": 0,
00:14:26.729 "io_qpairs": 0,
00:14:26.729 "current_admin_qpairs": 0,
00:14:26.729 "current_io_qpairs": 0,
00:14:26.729 "pending_bdev_io": 0,
00:14:26.729 "completed_nvme_io": 0,
00:14:26.729 "transports": [
00:14:26.729 {
00:14:26.729 "trtype": "TCP"
00:14:26.729 }
00:14:26.729 ]
00:14:26.729 },
00:14:26.729 {
00:14:26.729 "name": "nvmf_tgt_poll_group_001",
00:14:26.729 "admin_qpairs": 0,
00:14:26.729 "io_qpairs": 0,
00:14:26.729 "current_admin_qpairs": 0,
00:14:26.729 "current_io_qpairs": 0,
00:14:26.730 "pending_bdev_io": 0,
00:14:26.730 "completed_nvme_io": 0,
00:14:26.730 "transports": [
00:14:26.730 {
00:14:26.730 "trtype": "TCP"
00:14:26.730 }
00:14:26.730 ]
00:14:26.730 },
00:14:26.730 {
00:14:26.730 "name": "nvmf_tgt_poll_group_002",
00:14:26.730 "admin_qpairs": 0,
00:14:26.730 "io_qpairs": 0,
00:14:26.730 "current_admin_qpairs": 0,
00:14:26.730 "current_io_qpairs": 0,
00:14:26.730 "pending_bdev_io": 0,
00:14:26.730 "completed_nvme_io": 0,
00:14:26.730 "transports": [
00:14:26.730 {
00:14:26.730 "trtype": "TCP"
00:14:26.730 }
00:14:26.730 ]
00:14:26.730 },
00:14:26.730 {
00:14:26.730 "name": "nvmf_tgt_poll_group_003",
00:14:26.730 "admin_qpairs": 0,
00:14:26.730 "io_qpairs": 0,
00:14:26.730 "current_admin_qpairs": 0,
00:14:26.730 "current_io_qpairs": 0,
00:14:26.730 "pending_bdev_io": 0,
00:14:26.730 "completed_nvme_io": 0,
00:14:26.730 "transports": [
00:14:26.730 {
00:14:26.730 "trtype": "TCP"
00:14:26.730 }
00:14:26.730 ]
00:14:26.730 }
00:14:26.730 ]
00:14:26.730 }'
00:14:26.730 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:14:26.730 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:14:26.730 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:14:26.730 11:20:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.730 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 Malloc1
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 [2024-07-12 11:20:13.175514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:14:26.990 [2024-07-12 11:20:13.200948] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562'
00:14:26.990 Failed to write to /dev/nvme-fabrics: Input/output error
00:14:26.990 could not add new controller: failed to write to nvme-fabrics device
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.990 11:20:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:28.367 11:20:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:14:28.367 11:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:28.367 11:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:28.367 11:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:28.367 11:20:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:30.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]]
00:14:30.390 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:30.391 [2024-07-12 11:20:16.682543] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562'
00:14:30.391 Failed to write to /dev/nvme-fabrics: Input/output error
00:14:30.391 could not add new controller: failed to write to nvme-fabrics device
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:30.391 11:20:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:31.893 11:20:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:14:31.893 11:20:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:31.893 11:20:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:31.893 11:20:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:31.893 11:20:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:33.817 11:20:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:33.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:33.817 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:33.817 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:33.817 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:33.817 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.077 [2024-07-12 11:20:20.228421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:34.077 11:20:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:35.458 11:20:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:35.458 11:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:35.458 11:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:35.458 11:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:35.458 11:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:37.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.379 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.639 [2024-07-12 11:20:23.750705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:37.639 11:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:39.016 11:20:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:39.016 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:39.016 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:39.016 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:39.016 11:20:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:14:40.923 11:20:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:40.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:40.923 [2024-07-12 11:20:27.269506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:40.923 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:41.182 11:20:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:42.120 11:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:14:42.120 11:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:14:42.120 11:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:42.120 11:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:42.120 11:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return
0 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 [2024-07-12 11:20:30.797509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.657 11:20:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.035 11:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.035 11:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:14:46.035 11:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.035 11:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:46.035 11:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:47.943 11:20:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:47.943 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 [2024-07-12 11:20:34.354554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.203 11:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.604 11:20:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.604 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.604 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.604 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:49.604 11:20:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.508 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.508 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.508 11:20:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 [2024-07-12 11:20:37.873970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 [2024-07-12 11:20:37.922089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 [2024-07-12 11:20:37.974257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 [2024-07-12 11:20:38.022422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.768 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 [2024-07-12 11:20:38.070596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.769 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:52.028 "tick_rate": 2300000000, 00:14:52.028 "poll_groups": [ 00:14:52.028 { 00:14:52.028 "name": "nvmf_tgt_poll_group_000", 00:14:52.028 "admin_qpairs": 2, 00:14:52.028 "io_qpairs": 168, 00:14:52.028 "current_admin_qpairs": 0, 00:14:52.028 "current_io_qpairs": 0, 00:14:52.028 "pending_bdev_io": 0, 00:14:52.028 "completed_nvme_io": 269, 00:14:52.028 "transports": [ 00:14:52.028 { 00:14:52.028 "trtype": "TCP" 00:14:52.028 } 00:14:52.028 ] 00:14:52.028 }, 00:14:52.028 { 00:14:52.028 "name": "nvmf_tgt_poll_group_001", 00:14:52.028 "admin_qpairs": 2, 00:14:52.028 "io_qpairs": 168, 
00:14:52.028 "current_admin_qpairs": 0, 00:14:52.028 "current_io_qpairs": 0, 00:14:52.028 "pending_bdev_io": 0, 00:14:52.028 "completed_nvme_io": 269, 00:14:52.028 "transports": [ 00:14:52.028 { 00:14:52.028 "trtype": "TCP" 00:14:52.028 } 00:14:52.028 ] 00:14:52.028 }, 00:14:52.028 { 00:14:52.028 "name": "nvmf_tgt_poll_group_002", 00:14:52.028 "admin_qpairs": 1, 00:14:52.028 "io_qpairs": 168, 00:14:52.028 "current_admin_qpairs": 0, 00:14:52.028 "current_io_qpairs": 0, 00:14:52.028 "pending_bdev_io": 0, 00:14:52.028 "completed_nvme_io": 168, 00:14:52.028 "transports": [ 00:14:52.028 { 00:14:52.028 "trtype": "TCP" 00:14:52.028 } 00:14:52.028 ] 00:14:52.028 }, 00:14:52.028 { 00:14:52.028 "name": "nvmf_tgt_poll_group_003", 00:14:52.028 "admin_qpairs": 2, 00:14:52.028 "io_qpairs": 168, 00:14:52.028 "current_admin_qpairs": 0, 00:14:52.028 "current_io_qpairs": 0, 00:14:52.028 "pending_bdev_io": 0, 00:14:52.028 "completed_nvme_io": 316, 00:14:52.028 "transports": [ 00:14:52.028 { 00:14:52.028 "trtype": "TCP" 00:14:52.028 } 00:14:52.028 ] 00:14:52.028 } 00:14:52.028 ] 00:14:52.028 }' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.028 rmmod nvme_tcp 00:14:52.028 rmmod nvme_fabrics 00:14:52.028 rmmod nvme_keyring 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 839763 ']' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 839763 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 839763 ']' 00:14:52.028 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 839763 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839763 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.029 11:20:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839763' 00:14:52.029 killing process with pid 839763 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 839763 00:14:52.029 11:20:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 839763 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.929 11:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.832 11:20:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.832 00:14:55.832 real 0m35.688s 00:14:55.832 user 1m49.324s 00:14:55.832 sys 0m6.211s 00:14:55.832 11:20:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.832 11:20:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 ************************************ 00:14:55.832 END TEST nvmf_rpc 00:14:55.832 ************************************ 00:14:55.832 11:20:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.832 11:20:41 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:55.832 11:20:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.832 11:20:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:55.832 11:20:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 ************************************ 00:14:55.832 START TEST nvmf_invalid 00:14:55.832 ************************************ 00:14:55.832 11:20:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:55.832 * Looking for test storage... 00:14:55.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.832 11:20:42 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.832 11:20:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.833 11:20:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.107 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:01.367 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.367 11:20:47 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:01.367 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:01.367 Found net devices under 0000:86:00.0: cvl_0_0 00:15:01.367 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:01.368 Found net devices under 0000:86:00.1: cvl_0_1 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.368 11:20:47 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:01.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:15:01.368 00:15:01.368 --- 10.0.0.2 ping statistics --- 00:15:01.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.368 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:15:01.368 00:15:01.368 --- 10.0.0.1 ping statistics --- 00:15:01.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.368 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.368 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=847929 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 847929 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 847929 ']' 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.627 11:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 [2024-07-12 11:20:47.833047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:01.627 [2024-07-12 11:20:47.833150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.627 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.627 [2024-07-12 11:20:47.941658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.885 [2024-07-12 11:20:48.152475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.885 [2024-07-12 11:20:48.152522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:01.885 [2024-07-12 11:20:48.152533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.885 [2024-07-12 11:20:48.152542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.885 [2024-07-12 11:20:48.152551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.885 [2024-07-12 11:20:48.152675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.885 [2024-07-12 11:20:48.152748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.885 [2024-07-12 11:20:48.152807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.885 [2024-07-12 11:20:48.152817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:02.451 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5779 00:15:02.451 [2024-07-12 11:20:48.800393] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:02.709 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:15:02.709 { 00:15:02.709 "nqn": "nqn.2016-06.io.spdk:cnode5779", 00:15:02.709 "tgt_name": "foobar", 00:15:02.709 "method": "nvmf_create_subsystem", 00:15:02.709 "req_id": 1 00:15:02.709 } 00:15:02.709 Got JSON-RPC error response 00:15:02.709 response: 00:15:02.709 { 00:15:02.709 "code": -32603, 00:15:02.709 "message": "Unable to find target foobar" 00:15:02.709 }' 00:15:02.709 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:02.709 { 00:15:02.709 "nqn": "nqn.2016-06.io.spdk:cnode5779", 00:15:02.709 "tgt_name": "foobar", 00:15:02.709 "method": "nvmf_create_subsystem", 00:15:02.709 "req_id": 1 00:15:02.709 } 00:15:02.709 Got JSON-RPC error response 00:15:02.709 response: 00:15:02.709 { 00:15:02.709 "code": -32603, 00:15:02.709 "message": "Unable to find target foobar" 00:15:02.709 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:02.709 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:02.709 11:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22135 00:15:02.709 [2024-07-12 11:20:48.989069] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22135: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:02.709 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:02.709 { 00:15:02.709 "nqn": "nqn.2016-06.io.spdk:cnode22135", 00:15:02.709 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:02.709 "method": "nvmf_create_subsystem", 00:15:02.709 "req_id": 1 00:15:02.709 } 00:15:02.709 Got JSON-RPC error response 00:15:02.709 response: 00:15:02.709 { 00:15:02.709 "code": -32602, 00:15:02.709 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:02.709 }' 00:15:02.709 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:02.709 { 00:15:02.709 "nqn": 
"nqn.2016-06.io.spdk:cnode22135", 00:15:02.709 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:02.709 "method": "nvmf_create_subsystem", 00:15:02.709 "req_id": 1 00:15:02.709 } 00:15:02.709 Got JSON-RPC error response 00:15:02.709 response: 00:15:02.709 { 00:15:02.709 "code": -32602, 00:15:02.709 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:02.709 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:02.709 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:02.709 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20304 00:15:02.966 [2024-07-12 11:20:49.185754] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20304: invalid model number 'SPDK_Controller' 00:15:02.966 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:02.966 { 00:15:02.966 "nqn": "nqn.2016-06.io.spdk:cnode20304", 00:15:02.967 "model_number": "SPDK_Controller\u001f", 00:15:02.967 "method": "nvmf_create_subsystem", 00:15:02.967 "req_id": 1 00:15:02.967 } 00:15:02.967 Got JSON-RPC error response 00:15:02.967 response: 00:15:02.967 { 00:15:02.967 "code": -32602, 00:15:02.967 "message": "Invalid MN SPDK_Controller\u001f" 00:15:02.967 }' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:02.967 { 00:15:02.967 "nqn": "nqn.2016-06.io.spdk:cnode20304", 00:15:02.967 "model_number": "SPDK_Controller\u001f", 00:15:02.967 "method": "nvmf_create_subsystem", 00:15:02.967 "req_id": 1 00:15:02.967 } 00:15:02.967 Got JSON-RPC error response 00:15:02.967 response: 00:15:02.967 { 00:15:02.967 "code": -32602, 00:15:02.967 "message": "Invalid MN SPDK_Controller\u001f" 00:15:02.967 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x33' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=u 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 118 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:02.967 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:15:03.225 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '^\3CilC6p{gu'\''|5^vJ;jS' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^\3CilC6p{gu'\''|5^vJ;jS' nqn.2016-06.io.spdk:cnode23843 00:15:03.226 [2024-07-12 11:20:49.506892] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23843: invalid serial number '^\3CilC6p{gu'|5^vJ;jS' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:03.226 { 00:15:03.226 "nqn": "nqn.2016-06.io.spdk:cnode23843", 00:15:03.226 "serial_number": "^\\3CilC6p{gu'\''|5^vJ;jS", 00:15:03.226 "method": "nvmf_create_subsystem", 00:15:03.226 "req_id": 1 00:15:03.226 } 00:15:03.226 Got JSON-RPC error response 00:15:03.226 response: 00:15:03.226 { 00:15:03.226 "code": -32602, 00:15:03.226 "message": "Invalid SN ^\\3CilC6p{gu'\''|5^vJ;jS" 00:15:03.226 }' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:03.226 { 00:15:03.226 "nqn": "nqn.2016-06.io.spdk:cnode23843", 00:15:03.226 "serial_number": "^\\3CilC6p{gu'|5^vJ;jS", 00:15:03.226 "method": "nvmf_create_subsystem", 00:15:03.226 "req_id": 1 00:15:03.226 } 00:15:03.226 Got JSON-RPC error response 00:15:03.226 response: 00:15:03.226 { 00:15:03.226 "code": -32602, 00:15:03.226 "message": "Invalid SN ^\\3CilC6p{gu'|5^vJ;jS" 00:15:03.226 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' 
'35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:03.226 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:03.226 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:03.485 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:03.485 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.485 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:03.486 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:03.486 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:03.486 11:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@29 -- # string='\-D7zHY{Px\";rpZN_14xL#3"4`GSxGhWDA4CrR"|"' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-D7zHY{Px\";rpZN_14xL#3"4`GSxGhWDA4CrR"|"' 00:15:03.486 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\-D7zHY{Px\";rpZN_14xL#3"4`GSxGhWDA4CrR"|"' nqn.2016-06.io.spdk:cnode21909 00:15:03.745 [2024-07-12 11:20:49.936372] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21909: invalid model number '\-D7zHY{Px\";rpZN_14xL#3"4`GSxGhWDA4CrR"|"' 00:15:03.745 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:03.745 { 00:15:03.745 "nqn": "nqn.2016-06.io.spdk:cnode21909", 00:15:03.745 "model_number": "\\-D7zHY{Px\\\";rpZN_14xL#3\"4`GSxGhWDA4CrR\"|\"", 00:15:03.745 "method": "nvmf_create_subsystem", 00:15:03.745 "req_id": 1 00:15:03.745 } 00:15:03.745 Got JSON-RPC error response 00:15:03.745 response: 00:15:03.745 { 00:15:03.745 "code": -32602, 00:15:03.745 "message": "Invalid MN \\-D7zHY{Px\\\";rpZN_14xL#3\"4`GSxGhWDA4CrR\"|\"" 00:15:03.745 }' 00:15:03.745 
11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:03.745 { 00:15:03.745 "nqn": "nqn.2016-06.io.spdk:cnode21909", 00:15:03.745 "model_number": "\\-D7zHY{Px\\\";rpZN_14xL#3\"4`GSxGhWDA4CrR\"|\"", 00:15:03.745 "method": "nvmf_create_subsystem", 00:15:03.745 "req_id": 1 00:15:03.745 } 00:15:03.745 Got JSON-RPC error response 00:15:03.745 response: 00:15:03.745 { 00:15:03.745 "code": -32602, 00:15:03.745 "message": "Invalid MN \\-D7zHY{Px\\\";rpZN_14xL#3\"4`GSxGhWDA4CrR\"|\"" 00:15:03.745 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:03.745 11:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:04.002 [2024-07-12 11:20:50.125148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:04.002 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:04.261 [2024-07-12 11:20:50.514503] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:04.261 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:04.261 { 00:15:04.261 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:04.261 "listen_address": { 00:15:04.261 "trtype": "tcp", 00:15:04.261 "traddr": "", 00:15:04.261 "trsvcid": "4421" 00:15:04.261 }, 
00:15:04.261 "method": "nvmf_subsystem_remove_listener", 00:15:04.261 "req_id": 1 00:15:04.261 } 00:15:04.261 Got JSON-RPC error response 00:15:04.261 response: 00:15:04.261 { 00:15:04.261 "code": -32602, 00:15:04.261 "message": "Invalid parameters" 00:15:04.261 }' 00:15:04.261 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:04.261 { 00:15:04.261 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:04.261 "listen_address": { 00:15:04.261 "trtype": "tcp", 00:15:04.261 "traddr": "", 00:15:04.261 "trsvcid": "4421" 00:15:04.261 }, 00:15:04.261 "method": "nvmf_subsystem_remove_listener", 00:15:04.261 "req_id": 1 00:15:04.261 } 00:15:04.261 Got JSON-RPC error response 00:15:04.261 response: 00:15:04.261 { 00:15:04.261 "code": -32602, 00:15:04.261 "message": "Invalid parameters" 00:15:04.261 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:04.261 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26205 -i 0 00:15:04.520 [2024-07-12 11:20:50.711145] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26205: invalid cntlid range [0-65519] 00:15:04.520 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:04.520 { 00:15:04.520 "nqn": "nqn.2016-06.io.spdk:cnode26205", 00:15:04.520 "min_cntlid": 0, 00:15:04.520 "method": "nvmf_create_subsystem", 00:15:04.520 "req_id": 1 00:15:04.520 } 00:15:04.520 Got JSON-RPC error response 00:15:04.520 response: 00:15:04.520 { 00:15:04.520 "code": -32602, 00:15:04.520 "message": "Invalid cntlid range [0-65519]" 00:15:04.520 }' 00:15:04.520 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:04.520 { 00:15:04.520 "nqn": "nqn.2016-06.io.spdk:cnode26205", 00:15:04.520 "min_cntlid": 0, 00:15:04.520 "method": "nvmf_create_subsystem", 00:15:04.520 "req_id": 1 00:15:04.520 } 00:15:04.520 Got JSON-RPC error 
response 00:15:04.520 response: 00:15:04.520 { 00:15:04.520 "code": -32602, 00:15:04.520 "message": "Invalid cntlid range [0-65519]" 00:15:04.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:04.520 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17188 -i 65520 00:15:04.779 [2024-07-12 11:20:50.891766] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17188: invalid cntlid range [65520-65519] 00:15:04.779 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:04.779 { 00:15:04.779 "nqn": "nqn.2016-06.io.spdk:cnode17188", 00:15:04.779 "min_cntlid": 65520, 00:15:04.779 "method": "nvmf_create_subsystem", 00:15:04.779 "req_id": 1 00:15:04.779 } 00:15:04.779 Got JSON-RPC error response 00:15:04.779 response: 00:15:04.779 { 00:15:04.779 "code": -32602, 00:15:04.779 "message": "Invalid cntlid range [65520-65519]" 00:15:04.779 }' 00:15:04.779 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:04.779 { 00:15:04.779 "nqn": "nqn.2016-06.io.spdk:cnode17188", 00:15:04.779 "min_cntlid": 65520, 00:15:04.779 "method": "nvmf_create_subsystem", 00:15:04.779 "req_id": 1 00:15:04.779 } 00:15:04.779 Got JSON-RPC error response 00:15:04.779 response: 00:15:04.779 { 00:15:04.779 "code": -32602, 00:15:04.779 "message": "Invalid cntlid range [65520-65519]" 00:15:04.779 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:04.779 11:20:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13697 -I 0 00:15:04.779 [2024-07-12 11:20:51.080403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13697: invalid cntlid range [1-0] 00:15:04.779 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:04.779 
{ 00:15:04.779 "nqn": "nqn.2016-06.io.spdk:cnode13697", 00:15:04.779 "max_cntlid": 0, 00:15:04.779 "method": "nvmf_create_subsystem", 00:15:04.779 "req_id": 1 00:15:04.779 } 00:15:04.779 Got JSON-RPC error response 00:15:04.779 response: 00:15:04.779 { 00:15:04.779 "code": -32602, 00:15:04.779 "message": "Invalid cntlid range [1-0]" 00:15:04.779 }' 00:15:04.779 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:04.779 { 00:15:04.779 "nqn": "nqn.2016-06.io.spdk:cnode13697", 00:15:04.779 "max_cntlid": 0, 00:15:04.779 "method": "nvmf_create_subsystem", 00:15:04.779 "req_id": 1 00:15:04.779 } 00:15:04.779 Got JSON-RPC error response 00:15:04.779 response: 00:15:04.779 { 00:15:04.779 "code": -32602, 00:15:04.779 "message": "Invalid cntlid range [1-0]" 00:15:04.779 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:04.779 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27281 -I 65520 00:15:05.037 [2024-07-12 11:20:51.269054] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27281: invalid cntlid range [1-65520] 00:15:05.037 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:05.037 { 00:15:05.037 "nqn": "nqn.2016-06.io.spdk:cnode27281", 00:15:05.037 "max_cntlid": 65520, 00:15:05.037 "method": "nvmf_create_subsystem", 00:15:05.037 "req_id": 1 00:15:05.037 } 00:15:05.037 Got JSON-RPC error response 00:15:05.037 response: 00:15:05.037 { 00:15:05.037 "code": -32602, 00:15:05.037 "message": "Invalid cntlid range [1-65520]" 00:15:05.037 }' 00:15:05.037 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:05.037 { 00:15:05.037 "nqn": "nqn.2016-06.io.spdk:cnode27281", 00:15:05.037 "max_cntlid": 65520, 00:15:05.037 "method": "nvmf_create_subsystem", 00:15:05.037 "req_id": 1 00:15:05.037 } 00:15:05.037 Got JSON-RPC error response 
00:15:05.037 response: 00:15:05.037 { 00:15:05.037 "code": -32602, 00:15:05.037 "message": "Invalid cntlid range [1-65520]" 00:15:05.037 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:05.037 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28537 -i 6 -I 5 00:15:05.296 [2024-07-12 11:20:51.449673] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28537: invalid cntlid range [6-5] 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:05.296 { 00:15:05.296 "nqn": "nqn.2016-06.io.spdk:cnode28537", 00:15:05.296 "min_cntlid": 6, 00:15:05.296 "max_cntlid": 5, 00:15:05.296 "method": "nvmf_create_subsystem", 00:15:05.296 "req_id": 1 00:15:05.296 } 00:15:05.296 Got JSON-RPC error response 00:15:05.296 response: 00:15:05.296 { 00:15:05.296 "code": -32602, 00:15:05.296 "message": "Invalid cntlid range [6-5]" 00:15:05.296 }' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:05.296 { 00:15:05.296 "nqn": "nqn.2016-06.io.spdk:cnode28537", 00:15:05.296 "min_cntlid": 6, 00:15:05.296 "max_cntlid": 5, 00:15:05.296 "method": "nvmf_create_subsystem", 00:15:05.296 "req_id": 1 00:15:05.296 } 00:15:05.296 Got JSON-RPC error response 00:15:05.296 response: 00:15:05.296 { 00:15:05.296 "code": -32602, 00:15:05.296 "message": "Invalid cntlid range [6-5]" 00:15:05.296 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:05.296 { 00:15:05.296 "name": "foobar", 00:15:05.296 "method": "nvmf_delete_target", 00:15:05.296 "req_id": 1 00:15:05.296 } 00:15:05.296 Got JSON-RPC 
error response 00:15:05.296 response: 00:15:05.296 { 00:15:05.296 "code": -32602, 00:15:05.296 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:05.296 }' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:05.296 { 00:15:05.296 "name": "foobar", 00:15:05.296 "method": "nvmf_delete_target", 00:15:05.296 "req_id": 1 00:15:05.296 } 00:15:05.296 Got JSON-RPC error response 00:15:05.296 response: 00:15:05.296 { 00:15:05.296 "code": -32602, 00:15:05.296 "message": "The specified target doesn't exist, cannot delete it." 00:15:05.296 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.296 rmmod nvme_tcp 00:15:05.296 rmmod nvme_fabrics 00:15:05.296 rmmod nvme_keyring 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 847929 ']' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 847929 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@948 -- # '[' -z 847929 ']' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 847929 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.296 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 847929 00:15:05.555 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.555 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.555 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 847929' 00:15:05.555 killing process with pid 847929 00:15:05.555 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 847929 00:15:05.555 11:20:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 847929 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.931 11:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.836 11:20:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.836 00:15:08.836 real 0m13.061s 00:15:08.836 user 0m22.196s 00:15:08.836 sys 
0m5.231s 00:15:08.836 11:20:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.836 11:20:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:08.836 ************************************ 00:15:08.836 END TEST nvmf_invalid 00:15:08.836 ************************************ 00:15:08.836 11:20:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:08.836 11:20:55 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:08.836 11:20:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.836 11:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.836 11:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.836 ************************************ 00:15:08.836 START TEST nvmf_abort 00:15:08.836 ************************************ 00:15:08.836 11:20:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:09.095 * Looking for test storage... 
00:15:09.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.095 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.096 11:20:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.374 11:20:59 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:14.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:14.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:14.374 Found net devices under 0000:86:00.0: cvl_0_0 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:14.374 Found net devices under 
0000:86:00.1: cvl_0_1 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.374 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.375 11:20:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:15:14.375 00:15:14.375 --- 10.0.0.2 ping statistics --- 00:15:14.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.375 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:14.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:15:14.375 00:15:14.375 --- 10.0.0.1 ping statistics --- 00:15:14.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.375 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=852255 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 852255 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 852255 ']' 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.375 11:21:00 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:14.375 11:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.375 [2024-07-12 11:21:00.249227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:14.375 [2024-07-12 11:21:00.249314] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.375 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.375 [2024-07-12 11:21:00.359388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.375 [2024-07-12 11:21:00.571745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.375 [2024-07-12 11:21:00.571791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.375 [2024-07-12 11:21:00.571806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.375 [2024-07-12 11:21:00.571815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.375 [2024-07-12 11:21:00.571824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.375 [2024-07-12 11:21:00.571953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.375 [2024-07-12 11:21:00.572015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.375 [2024-07-12 11:21:00.572025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 [2024-07-12 11:21:01.071563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 Malloc0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 Delay0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:14.942 [2024-07-12 11:21:01.200613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.942 11:21:01 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:14.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.201 [2024-07-12 11:21:01.389547] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:17.736 Initializing NVMe Controllers 00:15:17.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:17.736 controller IO queue size 128 less than required 00:15:17.736 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:17.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:17.736 Initialization complete. Launching workers. 
00:15:17.736 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 39141 00:15:17.736 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39202, failed to submit 66 00:15:17.736 success 39141, unsuccess 61, failed 0 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.736 rmmod nvme_tcp 00:15:17.736 rmmod nvme_fabrics 00:15:17.736 rmmod nvme_keyring 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 852255 ']' 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 852255 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 852255 ']' 00:15:17.736 11:21:03 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 852255 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 852255 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 852255' 00:15:17.736 killing process with pid 852255 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 852255 00:15:17.736 11:21:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 852255 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.112 11:21:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.014 11:21:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:21.014 00:15:21.014 real 0m12.170s 00:15:21.014 user 0m16.355s 00:15:21.014 sys 0m4.747s 00:15:21.014 11:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:15:21.014 11:21:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:21.014 ************************************ 00:15:21.014 END TEST nvmf_abort 00:15:21.014 ************************************ 00:15:21.014 11:21:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:21.014 11:21:07 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:21.014 11:21:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.014 11:21:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.014 11:21:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.014 ************************************ 00:15:21.014 START TEST nvmf_ns_hotplug_stress 00:15:21.014 ************************************ 00:15:21.014 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:21.273 * Looking for test storage... 
00:15:21.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.273 11:21:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.273 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.274 11:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:26.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.545 
11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:26.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:26.545 
Found net devices under 0000:86:00.0: cvl_0_0 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:26.545 Found net devices under 0000:86:00.1: cvl_0_1 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.545 11:21:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:26.545 00:15:26.545 --- 10.0.0.2 ping statistics --- 00:15:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.545 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:15:26.545 00:15:26.545 --- 10.0.0.1 ping statistics --- 00:15:26.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.545 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.545 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=856980 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 856980 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 856980 ']' 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.546 11:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.546 [2024-07-12 11:21:12.654648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:26.546 [2024-07-12 11:21:12.654736] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.546 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.546 [2024-07-12 11:21:12.761742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.804 [2024-07-12 11:21:12.974033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.804 [2024-07-12 11:21:12.974077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.804 [2024-07-12 11:21:12.974091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.804 [2024-07-12 11:21:12.974099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.804 [2024-07-12 11:21:12.974108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.804 [2024-07-12 11:21:12.974230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.804 [2024-07-12 11:21:12.974289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.804 [2024-07-12 11:21:12.974299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.372 [2024-07-12 11:21:13.624868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.372 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:27.632 11:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.891 [2024-07-12 11:21:14.000815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:15:27.891 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.891 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:28.150 Malloc0 00:15:28.150 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:28.409 Delay0 00:15:28.409 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.668 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:28.668 NULL1 00:15:28.668 11:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:28.927 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=857449 00:15:28.927 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:28.927 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:28.927 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.927 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.212 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.212 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:29.212 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:29.471 true 00:15:29.471 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:29.471 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.729 11:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.087 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:30.087 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:30.087 true 00:15:30.087 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:30.087 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.356 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.356 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:30.356 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:30.614 true 00:15:30.614 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:30.614 11:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.872 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.129 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:31.129 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:31.129 true 00:15:31.129 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:31.129 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.388 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.647 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:31.647 11:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:31.647 true 00:15:31.905 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:31.905 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.905 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.163 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:32.163 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:32.421 true 00:15:32.421 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:32.421 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.679 11:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.679 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:32.679 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:32.937 true 00:15:32.937 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:32.937 11:21:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.195 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.454 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:33.454 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:33.454 true 00:15:33.454 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:33.454 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.712 11:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.970 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:33.970 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:34.229 true 00:15:34.229 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:34.229 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.229 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.488 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:34.488 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:34.747 true 00:15:34.747 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:34.747 11:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.006 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.006 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:35.006 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:35.264 true 00:15:35.264 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:35.264 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.523 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.782 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:35.782 11:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:35.782 true 00:15:35.782 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:35.782 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.041 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.300 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:36.300 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:36.300 true 00:15:36.559 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:36.559 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.559 11:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.818 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:36.818 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:37.078 true 00:15:37.078 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:37.078 11:21:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.078 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.336 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:37.336 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:37.594 true 00:15:37.594 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:37.594 11:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.853 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.112 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:38.112 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:38.112 true 00:15:38.112 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:38.112 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.370 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.629 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:38.629 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:38.629 true 00:15:38.629 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:38.629 11:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.887 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.145 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:39.145 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:39.403 true 00:15:39.403 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:39.403 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.403 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:39.662 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:39.662 11:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:39.921 true 00:15:39.921 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:39.921 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.180 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.439 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:40.439 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:40.439 true 00:15:40.439 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:40.439 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.697 11:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.956 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:40.956 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:40.956 true 00:15:41.214 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:41.214 11:21:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.214 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.473 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:41.473 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:41.732 true 00:15:41.732 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:41.732 11:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.732 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.991 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:41.991 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:42.250 true 00:15:42.250 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:42.250 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.509 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.509 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:42.509 11:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:42.767 true 00:15:42.767 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:42.767 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.024 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.283 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:43.283 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:43.283 true 00:15:43.283 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:43.283 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.541 11:21:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.800 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:43.800 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:44.059 true 00:15:44.059 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:44.059 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.059 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.318 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:44.318 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:44.578 true 00:15:44.578 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:44.578 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.837 11:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.837 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:44.837 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:45.095 true 00:15:45.095 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:45.095 11:21:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.353 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.611 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:45.611 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:45.611 true 00:15:45.611 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:45.611 11:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.869 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.127 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:46.127 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:46.384 true 00:15:46.384 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:46.384 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.384 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.642 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:15:46.642 11:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:15:46.901 true 00:15:46.901 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:46.901 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.159 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.159 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:15:47.159 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:15:47.417 true 00:15:47.417 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:47.417 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.674 11:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.933 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:15:47.933 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:47.933 true 00:15:47.933 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:47.933 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.191 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.450 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:48.450 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:48.708 true 00:15:48.708 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:48.708 11:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.708 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.966 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:48.966 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:49.225 true 00:15:49.225 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:49.225 11:21:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.484 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.484 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:49.484 11:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:49.742 true 00:15:49.743 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:49.743 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.001 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:15:50.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:15:50.259 true 00:15:50.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:50.259 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.517 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.776 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:15:50.776 11:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:15:51.034 true 00:15:51.034 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:51.034 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.034 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.349 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:15:51.349 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:15:51.608 true 00:15:51.608 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:51.608 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.608 11:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.866 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:15:51.866 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:15:52.125 true 00:15:52.125 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:52.125 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.411 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.411 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:15:52.411 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:15:52.669 true 00:15:52.669 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:52.669 11:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.927 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.186 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:15:53.186 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:15:53.186 true 00:15:53.186 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:53.186 11:21:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.444 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.703 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:15:53.703 11:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:15:53.703 true 00:15:53.962 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:53.962 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.962 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.220 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:54.220 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:54.479 true 00:15:54.479 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:54.479 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.737 11:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.737 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:54.737 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:54.995 true 00:15:54.995 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:54.995 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.254 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.513 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:55.513 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:55.513 true 00:15:55.513 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:55.513 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.771 11:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.029 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:56.030 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:15:56.030 true 00:15:56.288 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:56.288 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.288 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.546 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:56.546 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:56.813 true 00:15:56.813 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:56.813 11:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.078 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.078 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:57.078 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:57.337 true 00:15:57.337 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:57.337 11:21:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.595 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.854 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:57.854 11:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:57.854 true 00:15:57.854 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:57.854 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.113 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.372 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:15:58.372 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:15:58.372 true 00:15:58.372 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449 00:15:58.372 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.631 11:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:58.890 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:15:58.890 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:15:58.890 true
00:15:59.148 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449
00:15:59.148 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:59.148 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:59.407 Initializing NVMe Controllers
00:15:59.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:59.407 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:15:59.407 Controller IO queue size 128, less than required.
00:15:59.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:59.407 WARNING: Some requested NVMe devices were skipped
00:15:59.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:15:59.407 Initialization complete. Launching workers.
00:15:59.407 ========================================================
00:15:59.407 Latency(us)
00:15:59.407 Device Information : IOPS      MiB/s    Average    min        max
00:15:59.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23206.92   11.33    5515.68    2689.15    11778.05
00:15:59.407 ========================================================
00:15:59.407 Total : 23206.92   11.33    5515.68    2689.15    11778.05
00:15:59.407
00:15:59.407 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:15:59.407 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:15:59.666 true
00:15:59.666 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857449
00:15:59.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (857449) - No such process
00:15:59.666 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 857449
00:15:59.666 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:59.666 11:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:59.925 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:59.925 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:59.925 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:59.925 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:59.925 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:00.184 null0 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:00.184 null1 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.184 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:00.442 null2 00:16:00.442 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.442 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.442 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:00.701 null3 00:16:00.701 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.701 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.701 11:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:00.701 null4 00:16:00.701 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.701 11:21:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.701 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:00.960 null5 00:16:00.960 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:00.960 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:00.960 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:01.220 null6 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:01.220 null7 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 862962 862965 862967 862972 862974 862977 862980 862983 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:01.220 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.221 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:01.480 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.739 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.740 11:21:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.740 11:21:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:01.998 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:02.256 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:02.256 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:02.256 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:02.256 11:21:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.257 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:02.257 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:02.257 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:02.257 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:02.515 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:02.774 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:02.774 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.774 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:02.774 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:02.774 11:21:48 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.774 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.775 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:02.775 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:02.775 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:02.775 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:02.775 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:03.034 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:03.293 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.552 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:03.810 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:03.810 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:03.810 11:21:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:03.810 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:04.069 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.329 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.329 11:21:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.588 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:04.848 11:21:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.848 11:21:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:04.848 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.107 rmmod nvme_tcp 00:16:05.107 rmmod nvme_fabrics 00:16:05.107 rmmod nvme_keyring 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 856980 ']' 00:16:05.107 
11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 856980 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 856980 ']' 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 856980 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856980 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856980' 00:16:05.107 killing process with pid 856980 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 856980 00:16:05.107 11:21:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 856980 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:07.012 11:21:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.918 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.918 00:16:08.918 real 0m47.569s 00:16:08.918 user 3m20.217s 00:16:08.918 sys 0m15.990s 00:16:08.918 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.918 11:21:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.918 ************************************ 00:16:08.918 END TEST nvmf_ns_hotplug_stress 00:16:08.918 ************************************ 00:16:08.918 11:21:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.918 11:21:54 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:08.918 11:21:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.918 11:21:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.918 11:21:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.918 ************************************ 00:16:08.918 START TEST nvmf_connect_stress 00:16:08.918 ************************************ 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:08.918 * Looking for test storage... 
00:16:08.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.918 11:21:55 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.918 11:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.193 11:22:00 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.193 
11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:14.193 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:14.193 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.193 
11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.193 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:14.194 Found net devices under 0000:86:00.0: cvl_0_0 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.194 11:22:00 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:14.194 Found net devices under 0000:86:00.1: cvl_0_1 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:14.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:16:14.194 00:16:14.194 --- 10.0.0.2 ping statistics --- 00:16:14.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.194 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:16:14.194 00:16:14.194 --- 10.0.0.1 ping statistics --- 00:16:14.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.194 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.194 11:22:00 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=867463 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 867463 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 867463 ']' 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.194 11:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 [2024-07-12 11:22:00.557371] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:14.454 [2024-07-12 11:22:00.557465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.454 [2024-07-12 11:22:00.667094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:14.714 [2024-07-12 11:22:00.883063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:14.714 [2024-07-12 11:22:00.883100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.714 [2024-07-12 11:22:00.883113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.714 [2024-07-12 11:22:00.883121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.714 [2024-07-12 11:22:00.883130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.714 [2024-07-12 11:22:00.883249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.714 [2024-07-12 11:22:00.883319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.714 [2024-07-12 11:22:00.883329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.282 [2024-07-12 11:22:01.381110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.282 11:22:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.282 [2024-07-12 11:22:01.414060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.282 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.283 NULL1 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=867570 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 
10 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.283 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.542 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.542 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:15.542 11:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.542 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.542 11:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.111 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.111 11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:16.111 11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.111 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.111 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.370 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.370 
11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:16.370 11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.370 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.370 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.630 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.630 11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:16.630 11:22:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.630 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.630 11:22:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.910 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:16.910 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.910 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.910 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.167 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.167 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:17.167 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.167 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.167 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.733 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.733 
11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:17.733 11:22:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.733 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.733 11:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.991 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.991 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:17.991 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.991 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.991 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.248 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.248 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:18.248 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.248 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.248 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.505 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.505 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:18.505 11:22:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.505 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.505 11:22:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.763 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.763 
11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:18.763 11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.763 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.763 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.383 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.383 11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:19.383 11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.383 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.383 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.641 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.641 11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:19.641 11:22:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.641 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.641 11:22:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.899 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.899 11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:19.899 11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.899 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.899 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.159 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.159 
11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:20.159 11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.159 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.159 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.418 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.418 11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:20.418 11:22:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.418 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.418 11:22:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.985 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.985 11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:20.985 11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.985 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.985 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.244 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.244 11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:21.244 11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.244 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.244 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.503 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.503 
11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:21.503 11:22:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.503 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.503 11:22:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.762 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.762 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:21.762 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.762 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.762 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.329 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.329 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:22.329 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.329 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.329 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.589 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.589 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:22.589 11:22:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.589 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.589 11:22:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.848 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.848 
11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:22.848 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.848 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.848 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:23.106 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.106 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.363 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.363 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:23.363 11:22:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.363 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.363 11:22:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.929 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.929 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:23.929 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.929 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.929 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.188 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.188 
11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:24.188 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.188 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.188 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.447 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.447 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:24.447 11:22:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.447 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.447 11:22:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.705 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.705 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:24.705 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.705 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.705 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.273 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.273 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:25.273 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.273 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.273 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.560 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:25.560 11:22:11 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 867570 00:16:25.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (867570) - No such process 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 867570 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.560 rmmod nvme_tcp 00:16:25.560 rmmod nvme_fabrics 00:16:25.560 rmmod nvme_keyring 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 867463 ']' 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 867463 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@948 -- # '[' -z 867463 ']' 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 867463 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 867463 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 867463' 00:16:25.560 killing process with pid 867463 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 867463 00:16:25.560 11:22:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 867463 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.937 11:22:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.840 11:22:15 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:29.099 00:16:29.099 real 0m20.189s 00:16:29.099 user 0m43.686s 00:16:29.099 sys 0m7.772s 00:16:29.099 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.099 11:22:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.099 ************************************ 00:16:29.099 END TEST nvmf_connect_stress 00:16:29.099 ************************************ 00:16:29.099 11:22:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:29.099 11:22:15 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:29.099 11:22:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:29.099 11:22:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.099 11:22:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.099 ************************************ 00:16:29.099 START TEST nvmf_fused_ordering 00:16:29.099 ************************************ 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:29.099 * Looking for test storage... 
00:16:29.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.099 11:22:15 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.099 11:22:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.373 11:22:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.373 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.374 
11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:34.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:34.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.374 
11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:34.374 Found net devices under 0000:86:00.0: cvl_0_0 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.374 11:22:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:34.374 Found net devices under 0000:86:00.1: cvl_0_1 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:34.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:16:34.374 00:16:34.374 --- 10.0.0.2 ping statistics --- 00:16:34.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.374 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:16:34.374 00:16:34.374 --- 10.0.0.1 ping statistics --- 00:16:34.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.374 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:34.374 11:22:20 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=872866 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 872866 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 872866 ']' 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.374 11:22:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.633 [2024-07-12 11:22:20.731285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:34.633 [2024-07-12 11:22:20.731370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.633 [2024-07-12 11:22:20.838557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.892 [2024-07-12 11:22:21.053966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:34.892 [2024-07-12 11:22:21.054012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.892 [2024-07-12 11:22:21.054024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.892 [2024-07-12 11:22:21.054036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.892 [2024-07-12 11:22:21.054046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.892 [2024-07-12 11:22:21.054082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.150 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.150 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:35.150 11:22:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.150 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.150 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.408 11:22:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 [2024-07-12 11:22:21.534421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 [2024-07-12 11:22:21.550620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 NULL1 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.409 11:22:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:35.409 [2024-07-12 11:22:21.621518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:35.409 [2024-07-12 11:22:21.621590] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873113 ] 00:16:35.409 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.667 Attached to nqn.2016-06.io.spdk:cnode1 00:16:35.667 Namespace ID: 1 size: 1GB 00:16:35.667 fused_ordering(0) 00:16:35.667 fused_ordering(1) 00:16:35.667 fused_ordering(2) 00:16:35.667 fused_ordering(3) 00:16:35.667 fused_ordering(4) 00:16:35.667 fused_ordering(5) 00:16:35.667 fused_ordering(6) 00:16:35.667 fused_ordering(7) 00:16:35.667 fused_ordering(8) 00:16:35.667 fused_ordering(9) 00:16:35.667 fused_ordering(10) 00:16:35.667 fused_ordering(11) 00:16:35.667 fused_ordering(12) 00:16:35.667 fused_ordering(13) 00:16:35.667 fused_ordering(14) 00:16:35.667 fused_ordering(15) 00:16:35.667 fused_ordering(16) 00:16:35.667 fused_ordering(17) 00:16:35.667 fused_ordering(18) 00:16:35.667 fused_ordering(19) 00:16:35.667 fused_ordering(20) 00:16:35.667 fused_ordering(21) 00:16:35.667 fused_ordering(22) 00:16:35.667 fused_ordering(23) 00:16:35.667 fused_ordering(24) 00:16:35.667 fused_ordering(25) 00:16:35.667 fused_ordering(26) 
00:16:35.667 fused_ordering(27) 00:16:35.667 fused_ordering(28) 00:16:35.667 fused_ordering(29) 00:16:35.667 fused_ordering(30) 00:16:35.667 fused_ordering(31) 00:16:35.667 fused_ordering(32) 00:16:35.667 fused_ordering(33) 00:16:35.667 fused_ordering(34) 00:16:35.667 fused_ordering(35) 00:16:35.667 fused_ordering(36) 00:16:35.668 fused_ordering(37) 00:16:35.668 fused_ordering(38) 00:16:35.668 fused_ordering(39) 00:16:35.668 fused_ordering(40) 00:16:35.668 fused_ordering(41) 00:16:35.668 fused_ordering(42) 00:16:35.668 fused_ordering(43) 00:16:35.668 fused_ordering(44) 00:16:35.668 fused_ordering(45) 00:16:35.668 fused_ordering(46) 00:16:35.668 fused_ordering(47) 00:16:35.668 fused_ordering(48) 00:16:35.668 fused_ordering(49) 00:16:35.668 fused_ordering(50) 00:16:35.668 fused_ordering(51) 00:16:35.668 fused_ordering(52) 00:16:35.668 fused_ordering(53) 00:16:35.668 fused_ordering(54) 00:16:35.668 fused_ordering(55) 00:16:35.668 fused_ordering(56) 00:16:35.668 fused_ordering(57) 00:16:35.668 fused_ordering(58) 00:16:35.668 fused_ordering(59) 00:16:35.668 fused_ordering(60) 00:16:35.668 fused_ordering(61) 00:16:35.668 fused_ordering(62) 00:16:35.668 fused_ordering(63) 00:16:35.668 fused_ordering(64) 00:16:35.668 fused_ordering(65) 00:16:35.668 fused_ordering(66) 00:16:35.668 fused_ordering(67) 00:16:35.668 fused_ordering(68) 00:16:35.668 fused_ordering(69) 00:16:35.668 fused_ordering(70) 00:16:35.668 fused_ordering(71) 00:16:35.668 fused_ordering(72) 00:16:35.668 fused_ordering(73) 00:16:35.668 fused_ordering(74) 00:16:35.668 fused_ordering(75) 00:16:35.668 fused_ordering(76) 00:16:35.668 fused_ordering(77) 00:16:35.668 fused_ordering(78) 00:16:35.668 fused_ordering(79) 00:16:35.668 fused_ordering(80) 00:16:35.668 fused_ordering(81) 00:16:35.668 fused_ordering(82) 00:16:35.668 fused_ordering(83) 00:16:35.668 fused_ordering(84) 00:16:35.668 fused_ordering(85) 00:16:35.668 fused_ordering(86) 00:16:35.668 fused_ordering(87) 00:16:35.668 fused_ordering(88) 00:16:35.668 
fused_ordering(89) 00:16:35.668 fused_ordering(90) 00:16:35.668 fused_ordering(91) 00:16:35.668 fused_ordering(92) 00:16:35.668 fused_ordering(93) 00:16:35.668 fused_ordering(94) 00:16:35.668 fused_ordering(95) 00:16:35.668 fused_ordering(96) 00:16:35.668 fused_ordering(97) 00:16:35.668 fused_ordering(98) 00:16:35.668 fused_ordering(99) 00:16:35.668 fused_ordering(100) 00:16:35.668 fused_ordering(101) 00:16:35.668 fused_ordering(102) 00:16:35.668 fused_ordering(103) 00:16:35.668 fused_ordering(104) 00:16:35.668 fused_ordering(105) 00:16:35.668 fused_ordering(106) 00:16:35.668 fused_ordering(107) 00:16:35.668 fused_ordering(108) 00:16:35.668 fused_ordering(109) 00:16:35.668 fused_ordering(110) 00:16:35.668 fused_ordering(111) 00:16:35.668 fused_ordering(112) 00:16:35.668 fused_ordering(113) 00:16:35.668 fused_ordering(114) 00:16:35.668 fused_ordering(115) 00:16:35.668 fused_ordering(116) 00:16:35.668 fused_ordering(117) 00:16:35.668 fused_ordering(118) 00:16:35.668 fused_ordering(119) 00:16:35.668 fused_ordering(120) 00:16:35.668 fused_ordering(121) 00:16:35.668 fused_ordering(122) 00:16:35.668 fused_ordering(123) 00:16:35.668 fused_ordering(124) 00:16:35.668 fused_ordering(125) 00:16:35.668 fused_ordering(126) 00:16:35.668 fused_ordering(127) 00:16:35.668 fused_ordering(128) 00:16:35.668 fused_ordering(129) 00:16:35.668 fused_ordering(130) 00:16:35.668 fused_ordering(131) 00:16:35.668 fused_ordering(132) 00:16:35.668 fused_ordering(133) 00:16:35.668 fused_ordering(134) 00:16:35.668 fused_ordering(135) 00:16:35.668 fused_ordering(136) 00:16:35.668 fused_ordering(137) 00:16:35.668 fused_ordering(138) 00:16:35.668 fused_ordering(139) 00:16:35.668 fused_ordering(140) 00:16:35.668 fused_ordering(141) 00:16:35.668 fused_ordering(142) 00:16:35.668 fused_ordering(143) 00:16:35.668 fused_ordering(144) 00:16:35.668 fused_ordering(145) 00:16:35.668 fused_ordering(146) 00:16:35.668 fused_ordering(147) 00:16:35.668 fused_ordering(148) 00:16:35.668 fused_ordering(149) 
00:16:35.668 fused_ordering(150) 00:16:35.668 fused_ordering(151) 00:16:35.668 fused_ordering(152) 00:16:35.668 fused_ordering(153) 00:16:35.668 fused_ordering(154) 00:16:35.668 fused_ordering(155) 00:16:35.668 fused_ordering(156) 00:16:35.668 fused_ordering(157) 00:16:35.668 fused_ordering(158) 00:16:35.668 fused_ordering(159) 00:16:35.668 fused_ordering(160) 00:16:35.668 fused_ordering(161) 00:16:35.668 fused_ordering(162) 00:16:35.668 fused_ordering(163) 00:16:35.668 fused_ordering(164) 00:16:35.668 fused_ordering(165) 00:16:35.668 fused_ordering(166) 00:16:35.668 fused_ordering(167) 00:16:35.668 fused_ordering(168) 00:16:35.668 fused_ordering(169) 00:16:35.668 fused_ordering(170) 00:16:35.668 fused_ordering(171) 00:16:35.668 fused_ordering(172) 00:16:35.668 fused_ordering(173) 00:16:35.668 fused_ordering(174) 00:16:35.668 fused_ordering(175) 00:16:35.668 fused_ordering(176) 00:16:35.668 fused_ordering(177) 00:16:35.668 fused_ordering(178) 00:16:35.668 fused_ordering(179) 00:16:35.668 fused_ordering(180) 00:16:35.668 fused_ordering(181) 00:16:35.668 fused_ordering(182) 00:16:35.668 fused_ordering(183) 00:16:35.668 fused_ordering(184) 00:16:35.668 fused_ordering(185) 00:16:35.668 fused_ordering(186) 00:16:35.668 fused_ordering(187) 00:16:35.668 fused_ordering(188) 00:16:35.668 fused_ordering(189) 00:16:35.668 fused_ordering(190) 00:16:35.668 fused_ordering(191) 00:16:35.668 fused_ordering(192) 00:16:35.668 fused_ordering(193) 00:16:35.668 fused_ordering(194) 00:16:35.668 fused_ordering(195) 00:16:35.668 fused_ordering(196) 00:16:35.668 fused_ordering(197) 00:16:35.668 fused_ordering(198) 00:16:35.668 fused_ordering(199) 00:16:35.668 fused_ordering(200) 00:16:35.668 fused_ordering(201) 00:16:35.668 fused_ordering(202) 00:16:35.668 fused_ordering(203) 00:16:35.668 fused_ordering(204) 00:16:35.668 fused_ordering(205) 00:16:36.235 fused_ordering(206) 00:16:36.235 fused_ordering(207) 00:16:36.235 fused_ordering(208) 00:16:36.235 fused_ordering(209) 00:16:36.235 
fused_ordering(210) 00:16:36.235 fused_ordering(211) 00:16:36.235 fused_ordering(212) 00:16:36.235 fused_ordering(213) 00:16:36.235 fused_ordering(214) 00:16:36.235 fused_ordering(215) 00:16:36.235 fused_ordering(216) 00:16:36.235 fused_ordering(217) 00:16:36.235 fused_ordering(218) 00:16:36.235 fused_ordering(219) 00:16:36.235 fused_ordering(220) 00:16:36.235 fused_ordering(221) 00:16:36.235 fused_ordering(222) 00:16:36.235 fused_ordering(223) 00:16:36.235 fused_ordering(224) 00:16:36.235 fused_ordering(225) 00:16:36.235 fused_ordering(226) 00:16:36.235 fused_ordering(227) 00:16:36.235 fused_ordering(228) 00:16:36.235 fused_ordering(229) 00:16:36.235 fused_ordering(230) 00:16:36.235 fused_ordering(231) 00:16:36.235 fused_ordering(232) 00:16:36.235 fused_ordering(233) 00:16:36.235 fused_ordering(234) 00:16:36.235 fused_ordering(235) 00:16:36.235 fused_ordering(236) 00:16:36.235 fused_ordering(237) 00:16:36.235 fused_ordering(238) 00:16:36.235 fused_ordering(239) 00:16:36.236 fused_ordering(240) 00:16:36.236 fused_ordering(241) 00:16:36.236 fused_ordering(242) 00:16:36.236 fused_ordering(243) 00:16:36.236 fused_ordering(244) 00:16:36.236 fused_ordering(245) 00:16:36.236 fused_ordering(246) 00:16:36.236 fused_ordering(247) 00:16:36.236 fused_ordering(248) 00:16:36.236 fused_ordering(249) 00:16:36.236 fused_ordering(250) 00:16:36.236 fused_ordering(251) 00:16:36.236 fused_ordering(252) 00:16:36.236 fused_ordering(253) 00:16:36.236 fused_ordering(254) 00:16:36.236 fused_ordering(255) 00:16:36.236 fused_ordering(256) 00:16:36.236 fused_ordering(257) 00:16:36.236 fused_ordering(258) 00:16:36.236 fused_ordering(259) 00:16:36.236 fused_ordering(260) 00:16:36.236 fused_ordering(261) 00:16:36.236 fused_ordering(262) 00:16:36.236 fused_ordering(263) 00:16:36.236 fused_ordering(264) 00:16:36.236 fused_ordering(265) 00:16:36.236 fused_ordering(266) 00:16:36.236 fused_ordering(267) 00:16:36.236 fused_ordering(268) 00:16:36.236 fused_ordering(269) 00:16:36.236 fused_ordering(270) 
00:16:36.236 fused_ordering(271) 00:16:36.236 fused_ordering(272) 00:16:36.236 fused_ordering(273) 00:16:36.236 fused_ordering(274) 00:16:36.236 fused_ordering(275) 00:16:36.236 fused_ordering(276) 00:16:36.236 fused_ordering(277) 00:16:36.236 fused_ordering(278) 00:16:36.236 fused_ordering(279) 00:16:36.236 fused_ordering(280) 00:16:36.236 fused_ordering(281) 00:16:36.236 fused_ordering(282) 00:16:36.236 fused_ordering(283) 00:16:36.236 fused_ordering(284) 00:16:36.236 fused_ordering(285) 00:16:36.236 fused_ordering(286) 00:16:36.236 fused_ordering(287) 00:16:36.236 fused_ordering(288) 00:16:36.236 fused_ordering(289) 00:16:36.236 fused_ordering(290) 00:16:36.236 fused_ordering(291) 00:16:36.236 fused_ordering(292) 00:16:36.236 fused_ordering(293) 00:16:36.236 fused_ordering(294) 00:16:36.236 fused_ordering(295) 00:16:36.236 fused_ordering(296) 00:16:36.236 fused_ordering(297) 00:16:36.236 fused_ordering(298) 00:16:36.236 fused_ordering(299) 00:16:36.236 fused_ordering(300) 00:16:36.236 fused_ordering(301) 00:16:36.236 fused_ordering(302) 00:16:36.236 fused_ordering(303) 00:16:36.236 fused_ordering(304) 00:16:36.236 fused_ordering(305) 00:16:36.236 fused_ordering(306) 00:16:36.236 fused_ordering(307) 00:16:36.236 fused_ordering(308) 00:16:36.236 fused_ordering(309) 00:16:36.236 fused_ordering(310) 00:16:36.236 fused_ordering(311) 00:16:36.236 fused_ordering(312) 00:16:36.236 fused_ordering(313) 00:16:36.236 fused_ordering(314) 00:16:36.236 fused_ordering(315) 00:16:36.236 fused_ordering(316) 00:16:36.236 fused_ordering(317) 00:16:36.236 fused_ordering(318) 00:16:36.236 fused_ordering(319) 00:16:36.236 fused_ordering(320) 00:16:36.236 fused_ordering(321) 00:16:36.236 fused_ordering(322) 00:16:36.236 fused_ordering(323) 00:16:36.236 fused_ordering(324) 00:16:36.236 fused_ordering(325) 00:16:36.236 fused_ordering(326) 00:16:36.236 fused_ordering(327) 00:16:36.236 fused_ordering(328) 00:16:36.236 fused_ordering(329) 00:16:36.236 fused_ordering(330) 00:16:36.236 
fused_ordering(331) 00:16:36.236 fused_ordering(332) 00:16:36.236 [... fused_ordering(333) through fused_ordering(1022) elided: repetitive per-counter log output, timestamps 00:16:36.236 through 00:16:37.632 ...] fused_ordering(1023) 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.632 rmmod nvme_tcp 00:16:37.632 rmmod nvme_fabrics 00:16:37.632 rmmod nvme_keyring 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:37.632 11:22:23 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 872866 ']' 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 872866 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 872866 ']' 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 872866 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872866 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872866' 00:16:37.632 killing process with pid 872866 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 872866 00:16:37.632 11:22:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 872866 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.010 11:22:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.916 11:22:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.916 00:16:40.917 real 0m11.904s 00:16:40.917 user 0m6.960s 00:16:40.917 sys 0m5.515s 00:16:40.917 11:22:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.917 11:22:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:40.917 ************************************ 00:16:40.917 END TEST nvmf_fused_ordering 00:16:40.917 ************************************ 00:16:40.917 11:22:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.917 11:22:27 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:40.917 11:22:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.917 11:22:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.917 11:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.917 ************************************ 00:16:40.917 START TEST nvmf_delete_subsystem 00:16:40.917 ************************************ 00:16:40.917 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:41.176 * Looking for test storage... 
00:16:41.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.176 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.177 11:22:27 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.177 11:22:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.461 11:22:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:46.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:46.461 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.461 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:46.462 Found net devices under 0000:86:00.0: cvl_0_0 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:46.462 Found net devices under 0000:86:00.1: cvl_0_1 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.462 
11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:46.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:46.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:16:46.462 00:16:46.462 --- 10.0.0.2 ping statistics --- 00:16:46.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.462 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:16:46.462 00:16:46.462 --- 10.0.0.1 ping statistics --- 00:16:46.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.462 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.462 
11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=877101 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 877101 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 877101 ']' 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.462 11:22:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.462 [2024-07-12 11:22:32.658610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:46.462 [2024-07-12 11:22:32.658694] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.462 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.462 [2024-07-12 11:22:32.767073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.722 [2024-07-12 11:22:32.978282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:46.722 [2024-07-12 11:22:32.978325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.722 [2024-07-12 11:22:32.978339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.722 [2024-07-12 11:22:32.978363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.722 [2024-07-12 11:22:32.978373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.722 [2024-07-12 11:22:32.978457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.722 [2024-07-12 11:22:32.978469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.290 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.290 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:16:47.290 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:47.290 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 [2024-07-12 11:22:33.472478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 [2024-07-12 11:22:33.492670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 NULL1 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 Delay0 00:16:47.291 11:22:33 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=877158 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:47.291 11:22:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:47.291 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.291 [2024-07-12 11:22:33.604589] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:16:49.199 11:22:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.199 11:22:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.199 11:22:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 
00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 [2024-07-12 11:22:35.788835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 
00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 [2024-07-12 11:22:35.789502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 
Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error 
(sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 [2024-07-12 11:22:35.790395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 starting I/O failed: -6 00:16:49.459 Read completed with error (sct=0, sc=8) 00:16:49.459 Write completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error 
(sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 Read completed with error (sct=0, sc=8) 00:16:49.460 starting I/O failed: -6 00:16:49.460 Write completed with error (sct=0, sc=8) 00:16:49.460 [2024-07-12 11:22:35.791469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 
is same with the state(5) to be set 00:16:50.396 [2024-07-12 11:22:36.743669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(5) to be set 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 [2024-07-12 11:22:36.792424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 
Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Write completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 Read completed with error (sct=0, sc=8) 00:16:50.683 [2024-07-12 11:22:36.794064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(5) to be set 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read 
completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 [2024-07-12 11:22:36.794342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(5) to be set 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed 
with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Write completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 Read completed with error (sct=0, sc=8) 00:16:50.684 [2024-07-12 11:22:36.795106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e580 is same with the state(5) to be set 00:16:50.684 11:22:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.684 11:22:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # 
delay=0 00:16:50.684 11:22:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 877158 00:16:50.684 11:22:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:50.684 Initializing NVMe Controllers 00:16:50.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:50.684 Controller IO queue size 128, less than required. 00:16:50.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:50.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:50.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:50.684 Initialization complete. Launching workers. 00:16:50.684 ======================================================== 00:16:50.684 Latency(us) 00:16:50.684 Device Information : IOPS MiB/s Average min max 00:16:50.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.26 0.10 944938.50 1407.11 1014256.99 00:16:50.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.64 0.07 886442.68 551.72 1014115.36 00:16:50.684 ======================================================== 00:16:50.684 Total : 347.90 0.17 919273.67 551.72 1014256.99 00:16:50.684 00:16:50.684 [2024-07-12 11:22:36.800944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001de00 (9): Bad file descriptor 00:16:50.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 877158 00:16:51.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (877158) - No such process 00:16:51.253 
11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 877158 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 877158 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 877158 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 [2024-07-12 11:22:37.321983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=877823 00:16:51.253 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:51.254 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:51.254 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:51.254 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:51.254 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.254 [2024-07-12 11:22:37.420691] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:16:51.513 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:51.513 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:51.513 11:22:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.080 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.080 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:52.080 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.648 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.648 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:52.648 11:22:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.216 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.216 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:53.216 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.783 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.783 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:53.783 11:22:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.042 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.042 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:54.042 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.301 Initializing NVMe Controllers 00:16:54.301 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.301 Controller IO queue size 128, less than required. 00:16:54.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:54.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:54.301 Initialization complete. Launching workers. 00:16:54.301 ======================================================== 00:16:54.301 Latency(us) 00:16:54.301 Device Information : IOPS MiB/s Average min max 00:16:54.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003992.65 1000158.36 1042005.49 00:16:54.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006434.95 1000420.67 1042170.33 00:16:54.301 ======================================================== 00:16:54.301 Total : 256.00 0.12 1005213.80 1000158.36 1042170.33 00:16:54.301 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 877823 00:16:54.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (877823) - No such process 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 877823 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' 
tcp == tcp ']' 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.560 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.560 rmmod nvme_tcp 00:16:54.560 rmmod nvme_fabrics 00:16:54.560 rmmod nvme_keyring 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 877101 ']' 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 877101 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 877101 ']' 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 877101 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 877101 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 877101' 00:16:54.819 killing process with pid 877101 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 877101 00:16:54.819 11:22:40 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # wait 877101 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.193 11:22:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.094 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:58.094 00:16:58.094 real 0m17.112s 00:16:58.094 user 0m31.880s 00:16:58.094 sys 0m4.940s 00:16:58.094 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.094 11:22:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:58.094 ************************************ 00:16:58.094 END TEST nvmf_delete_subsystem 00:16:58.094 ************************************ 00:16:58.094 11:22:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:58.094 11:22:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:58.094 11:22:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:58.094 11:22:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.094 11:22:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:58.094 ************************************ 00:16:58.094 START TEST 
nvmf_ns_masking 00:16:58.094 ************************************ 00:16:58.094 11:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:58.353 * Looking for test storage... 00:16:58.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3984f022-ef06-49f0-98b7-8e97d73e65ec 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=af701482-daa8-4fa4-8485-75ade014ffab 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3bb75705-a0b7-4d34-b46a-4534a753468d 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.353 11:22:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.353 11:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:03.625 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:03.626 11:22:49 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:03.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:03.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:03.626 Found net devices under 0000:86:00.0: cvl_0_0 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:03.626 Found net devices under 0000:86:00.1: cvl_0_1 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:17:03.626 00:17:03.626 --- 10.0.0.2 ping statistics --- 00:17:03.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.626 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:17:03.626 00:17:03.626 --- 10.0.0.1 ping statistics --- 00:17:03.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.626 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.626 11:22:49 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=882044 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 882044 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 882044 ']' 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.626 11:22:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:03.886 [2024-07-12 11:22:50.032787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
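The interface plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) moves one port of the NIC into a network namespace so target and initiator can talk over real TCP on a single host. A dry-run sketch of that sequence, using the `cvl_0_0`/`cvl_0_1` names and 10.0.0.x addresses from this run; the `run` wrapper only prints the commands, since the real steps need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP loopback topology set up by nvmf_tcp_init:
# print each privileged step instead of executing it.
NS=cvl_0_0_ns_spdk            # network namespace holding the target side
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "+ $*"; }        # swap the echo for "$@" to execute for real

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                  # target NIC into the ns
run ip addr add "$INI_IP/24" dev "$INI_IF"             # initiator side, host ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                                # verify both directions
run ip netns exec "$NS" ping -c 1 "$INI_IP"
```

The pings at the end mirror the two `ping -c 1` checks in the log; only after both succeed does the harness start the target inside the namespace.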
00:17:03.886 [2024-07-12 11:22:50.032896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.886 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.886 [2024-07-12 11:22:50.140068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.146 [2024-07-12 11:22:50.361717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.146 [2024-07-12 11:22:50.361764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.146 [2024-07-12 11:22:50.361776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.146 [2024-07-12 11:22:50.361803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.146 [2024-07-12 11:22:50.361813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
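With `nvmf_tgt` started above inside the namespace, the rest of the bring-up is driven over JSON-RPC. A condensed dry-run sketch of the sequence this suite issues (paths shortened, sizes and NQN as in this run; the `rpc` wrapper prints rather than executes):

```shell
#!/usr/bin/env bash
# Sketch of the RPC bring-up ns_masking.sh performs against nvmf_tgt.
RPC="scripts/rpc.py"                                   # shortened path, assumed
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { echo "$RPC $*"; }     # swap the echo for the real invocation

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512 B blocks
rpc bdev_malloc_create 64 512 -b Malloc2
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1          # auto-visible namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Later in the test the namespace is re-added with `--no-auto-visible`, which is what makes the `nvmf_ns_add_host`/`nvmf_ns_remove_host` masking calls meaningful.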
00:17:04.146 [2024-07-12 11:22:50.361838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.714 11:22:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:04.714 [2024-07-12 11:22:51.025273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.714 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:04.714 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:04.714 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:04.973 Malloc1 00:17:04.973 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:05.232 Malloc2 00:17:05.232 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:05.491 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:05.750 11:22:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.750 [2024-07-12 11:22:52.052908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.750 11:22:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:05.750 11:22:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3bb75705-a0b7-4d34-b46a-4534a753468d -a 10.0.0.2 -s 4420 -i 4 00:17:06.009 11:22:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.009 11:22:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:06.009 11:22:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.009 11:22:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:06.009 11:22:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:08.539 
11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.539 [ 0]:0x1 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67a2774730bf44b3abbaba4eaf7e9f84 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67a2774730bf44b3abbaba4eaf7e9f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:08.539 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.540 [ 0]:0x1 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
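The `ns_is_visible` helper exercised above decides visibility from the NGUID: `nvme id-ns -o json | jq -r .nguid` returns the real NGUID for a namespace the controller can see, and all zeros for one it cannot. A standalone sketch of just that comparison, with a sample NGUID taken from this run:

```shell
#!/usr/bin/env bash
# Visibility check as in ns_masking.sh: a masked namespace reports an
# all-zero NGUID from `nvme id-ns`, a visible one reports its real NGUID.
ZERO_NGUID=00000000000000000000000000000000

ns_is_visible() {             # $1 = nguid string parsed from id-ns JSON
    [[ "$1" != "$ZERO_NGUID" ]]
}

ns_is_visible 67a2774730bf44b3abbaba4eaf7e9f84 && echo "nsid 1: visible"
ns_is_visible "$ZERO_NGUID" || echo "nsid 1: masked"
```

In the real helper the first argument comes from `nvme list-ns`/`nvme id-ns` against `/dev/nvme0`; here it is passed in directly so the logic can be read in isolation.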
00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67a2774730bf44b3abbaba4eaf7e9f84 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67a2774730bf44b3abbaba4eaf7e9f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.540 [ 1]:0x2 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.540 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.798 11:22:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:08.798 11:22:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:08.798 11:22:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3bb75705-a0b7-4d34-b46a-4534a753468d -a 10.0.0.2 -s 4420 -i 4 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:09.057 11:22:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:10.961 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
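The `NOT ns_is_visible 0x1` calls that follow rely on the `NOT` wrapper from common/autotest_common.sh, which inverts the wrapped command's exit status so the test can assert that a namespace is *not* visible. A minimal sketch of that inversion (simplified; the real helper also validates the argument via `valid_exec_arg` and tracks `es` as in the trace):

```shell
#!/usr/bin/env bash
# Minimal sketch of the NOT helper: succeed only when the wrapped command
# fails, but pass crashes/signals (status > 128) through unchanged.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # signal or crash: do not invert
    (( es != 0 ))                    # invert: failure -> 0, success -> 1
}

NOT false && echo "expected failure asserted"
NOT true  || echo "unexpected success caught"
```

This is why the trace shows `es=1` followed by `(( !es == 0 ))` after each masked-namespace check: the inner `ns_is_visible` failing is the passing outcome.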
00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.221 [ 0]:0x2 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.221 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.480 [ 0]:0x1 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67a2774730bf44b3abbaba4eaf7e9f84 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67a2774730bf44b3abbaba4eaf7e9f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:17:11.480 [ 1]:0x2 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.480 11:22:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:11.740 [ 0]:0x2 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:11.740 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:11.999 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3bb75705-a0b7-4d34-b46a-4534a753468d -a 10.0.0.2 -s 4420 -i 4 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:12.258 11:22:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.793 [ 0]:0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=67a2774730bf44b3abbaba4eaf7e9f84 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 67a2774730bf44b3abbaba4eaf7e9f84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.793 [ 1]:0x2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:14.793 11:23:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:14.793 [ 0]:0x2 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:14.793 11:23:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:14.793 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:15.053 [2024-07-12 11:23:01.212390] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:15.053 request: 00:17:15.053 { 00:17:15.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.053 "nsid": 2, 00:17:15.053 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.053 "method": "nvmf_ns_remove_host", 00:17:15.053 "req_id": 1 00:17:15.053 } 00:17:15.053 Got JSON-RPC error response 00:17:15.053 response: 00:17:15.053 { 00:17:15.053 "code": -32602, 00:17:15.053 "message": "Invalid parameters" 00:17:15.053 } 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:15.053 [ 0]:0x2 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ccecde42cc1042e3b945a1cf67e78b50 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ccecde42cc1042e3b945a1cf67e78b50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:15.053 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=884061 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 884061 /var/tmp/host.sock 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 884061 ']' 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:15.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.313 11:23:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:15.313 [2024-07-12 11:23:01.548230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:15.313 [2024-07-12 11:23:01.548335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884061 ] 00:17:15.313 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.313 [2024-07-12 11:23:01.651720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.572 [2024-07-12 11:23:01.876774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.509 11:23:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.509 11:23:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:16.509 11:23:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.767 11:23:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3984f022-ef06-49f0-98b7-8e97d73e65ec 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3984F022EF0649F098B78E97D73E65EC -i 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 
-- # uuid2nguid af701482-daa8-4fa4-8485-75ade014ffab 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:17.026 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AF701482DAA84FA4848575ADE014FFAB -i 00:17:17.285 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.544 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:17.544 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:17.544 11:23:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:17.803 nvme0n1 00:17:17.803 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:17.803 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:18.062 nvme1n2 00:17:18.062 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 
00:17:18.062 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:18.062 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:18.062 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:18.062 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:18.321 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:18.321 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:18.321 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:18.321 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:18.580 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3984f022-ef06-49f0-98b7-8e97d73e65ec == \3\9\8\4\f\0\2\2\-\e\f\0\6\-\4\9\f\0\-\9\8\b\7\-\8\e\9\7\d\7\3\e\6\5\e\c ]] 00:17:18.580 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:18.580 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ af701482-daa8-4fa4-8485-75ade014ffab == \a\f\7\0\1\4\8\2\-\d\a\a\8\-\4\f\a\4\-\8\4\8\5\-\7\5\a\d\e\0\1\4\f\f\a\b ]] 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 884061 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 884061 ']' 
00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 884061 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.581 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884061 00:17:18.840 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:18.840 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:18.840 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884061' 00:17:18.840 killing process with pid 884061 00:17:18.840 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 884061 00:17:18.840 11:23:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 884061 00:17:21.378 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.379 rmmod nvme_tcp 00:17:21.379 rmmod nvme_fabrics 00:17:21.379 rmmod 
nvme_keyring 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 882044 ']' 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 882044 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 882044 ']' 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 882044 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882044 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882044' 00:17:21.379 killing process with pid 882044 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 882044 00:17:21.379 11:23:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 882044 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.326 11:23:09 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.326 11:23:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.234 11:23:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.234 00:17:25.234 real 0m26.899s 00:17:25.234 user 0m30.913s 00:17:25.234 sys 0m6.305s 00:17:25.234 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.234 11:23:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:25.234 ************************************ 00:17:25.234 END TEST nvmf_ns_masking 00:17:25.234 ************************************ 00:17:25.234 11:23:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.234 11:23:11 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:17:25.234 11:23:11 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:25.234 11:23:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.234 11:23:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.234 11:23:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.234 ************************************ 00:17:25.234 START TEST nvmf_nvme_cli 00:17:25.234 ************************************ 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:25.234 * Looking for test storage... 
00:17:25.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.234 11:23:11 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.234 11:23:11 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.234 11:23:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.510 11:23:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.510 11:23:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.510 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.511 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.511 11:23:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.511 Found net devices under 0000:86:00.0: cvl_0_0 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.511 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:17:30.511 00:17:30.511 --- 10.0.0.2 ping statistics --- 00:17:30.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.511 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:30.511 00:17:30.511 --- 10.0.0.1 ping statistics --- 00:17:30.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.511 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=888802 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 888802 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 888802 ']' 
00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.511 11:23:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:30.770 [2024-07-12 11:23:16.919227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:30.770 [2024-07-12 11:23:16.919312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.770 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.770 [2024-07-12 11:23:17.028218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.029 [2024-07-12 11:23:17.251903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.029 [2024-07-12 11:23:17.251948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.029 [2024-07-12 11:23:17.251960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.029 [2024-07-12 11:23:17.251969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.029 [2024-07-12 11:23:17.251977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:31.029 [2024-07-12 11:23:17.252101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.029 [2024-07-12 11:23:17.252214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.029 [2024-07-12 11:23:17.252304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.029 [2024-07-12 11:23:17.252314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 [2024-07-12 11:23:17.732718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 Malloc0 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.597 
11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 Malloc1 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.597 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.856 [2024-07-12 11:23:17.960301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.856 11:23:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:31.856 00:17:31.856 Discovery Log Number of Records 2, Generation counter 2 00:17:31.856 =====Discovery Log Entry 0====== 00:17:31.856 trtype: tcp 00:17:31.856 adrfam: ipv4 00:17:31.856 subtype: current discovery subsystem 00:17:31.856 treq: not required 00:17:31.856 portid: 0 00:17:31.856 trsvcid: 4420 00:17:31.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:31.856 traddr: 10.0.0.2 00:17:31.856 eflags: explicit discovery connections, duplicate discovery information 00:17:31.856 sectype: none 00:17:31.856 =====Discovery Log Entry 1====== 00:17:31.856 trtype: tcp 00:17:31.856 adrfam: ipv4 00:17:31.856 subtype: nvme subsystem 00:17:31.856 treq: not required 00:17:31.856 portid: 0 00:17:31.856 trsvcid: 4420 00:17:31.856 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:31.856 traddr: 10.0.0.2 00:17:31.856 eflags: none 00:17:31.856 sectype: none 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:31.856 11:23:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:33.233 11:23:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:35.131 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:35.132 /dev/nvme0n1 ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:35.132 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.390 rmmod nvme_tcp 00:17:35.390 rmmod nvme_fabrics 00:17:35.390 rmmod nvme_keyring 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 888802 ']' 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 888802 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 888802 ']' 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 888802 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.390 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888802 00:17:35.648 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.648 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.648 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888802' 00:17:35.648 killing process with pid 888802 00:17:35.648 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 888802 00:17:35.648 11:23:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 888802 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.548 11:23:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.449 11:23:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.449 00:17:39.449 real 0m14.134s 00:17:39.449 user 0m24.719s 
00:17:39.449 sys 0m4.735s 00:17:39.449 11:23:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.449 11:23:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.449 ************************************ 00:17:39.449 END TEST nvmf_nvme_cli 00:17:39.449 ************************************ 00:17:39.449 11:23:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:39.449 11:23:25 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:17:39.449 11:23:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:39.449 11:23:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.449 11:23:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.449 11:23:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.449 ************************************ 00:17:39.449 START TEST nvmf_host_management 00:17:39.449 ************************************ 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:39.449 * Looking for test storage... 
00:17:39.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.449 
11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.449 11:23:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.720 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:44.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.721 
11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:44.721 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:44.721 Found net devices under 0000:86:00.0: cvl_0_0 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:44.721 Found net devices under 0000:86:00.1: cvl_0_1 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:44.721 11:23:30 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.721 11:23:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.721 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.721 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.721 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:17:44.721 00:17:44.721 --- 10.0.0.2 ping statistics --- 00:17:44.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.721 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:44.721 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:17:44.980 00:17:44.980 --- 10.0.0.1 ping statistics --- 00:17:44.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.980 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:44.980 11:23:31 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=893436 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 893436 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 893436 ']' 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.980 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:44.980 [2024-07-12 11:23:31.199941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:44.981 [2024-07-12 11:23:31.200021] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.981 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.981 [2024-07-12 11:23:31.307334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.239 [2024-07-12 11:23:31.537274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.239 [2024-07-12 11:23:31.537317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.239 [2024-07-12 11:23:31.537329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.239 [2024-07-12 11:23:31.537338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.239 [2024-07-12 11:23:31.537347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.239 [2024-07-12 11:23:31.537472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.239 [2024-07-12 11:23:31.537539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.239 [2024-07-12 11:23:31.537665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.239 [2024-07-12 11:23:31.537689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:45.807 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.807 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:45.807 11:23:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.807 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.807 11:23:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.807 [2024-07-12 11:23:32.020113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.807 11:23:32 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:45.807 Malloc0 00:17:45.807 [2024-07-12 11:23:32.147766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.807 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=893573 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 893573 /var/tmp/bdevperf.sock 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 893573 ']' 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:46.067 { 00:17:46.067 "params": { 00:17:46.067 "name": "Nvme$subsystem", 00:17:46.067 "trtype": "$TEST_TRANSPORT", 00:17:46.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:46.067 "adrfam": "ipv4", 00:17:46.067 "trsvcid": "$NVMF_PORT", 00:17:46.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:46.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:46.067 "hdgst": ${hdgst:-false}, 00:17:46.067 "ddgst": ${ddgst:-false} 00:17:46.067 }, 00:17:46.067 "method": "bdev_nvme_attach_controller" 00:17:46.067 } 00:17:46.067 EOF 00:17:46.067 )") 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:46.067 11:23:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:46.067 "params": { 00:17:46.067 "name": "Nvme0", 00:17:46.067 "trtype": "tcp", 00:17:46.067 "traddr": "10.0.0.2", 00:17:46.067 "adrfam": "ipv4", 00:17:46.067 "trsvcid": "4420", 00:17:46.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:46.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:46.067 "hdgst": false, 00:17:46.067 "ddgst": false 00:17:46.067 }, 00:17:46.067 "method": "bdev_nvme_attach_controller" 00:17:46.067 }' 00:17:46.067 [2024-07-12 11:23:32.266366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:46.067 [2024-07-12 11:23:32.266470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893573 ] 00:17:46.067 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.067 [2024-07-12 11:23:32.372314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.326 [2024-07-12 11:23:32.590907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.892 Running I/O for 10 seconds... 
00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:46.892 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.892 
11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:47.151 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.151 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:17:47.151 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:17:47.151 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=595 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 595 -ge 100 ']' 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:47.411 11:23:33 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:47.411 [2024-07-12 11:23:33.580667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:47.411 [2024-07-12 11:23:33.580742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:47.411 [2024-07-12 11:23:33.580753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.411 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:47.411 [2024-07-12 11:23:33.586885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.411 [2024-07-12 11:23:33.586929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.411 [2024-07-12 11:23:33.586949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.411 [2024-07-12 11:23:33.586960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.586971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.412 [2024-07-12 11:23:33.586981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.586992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.412 [2024-07-12 11:23:33.587002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.587019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:17:47.412 [2024-07-12 11:23:33.592411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.412 [2024-07-12 11:23:33.592441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.592462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.412 [2024-07-12 11:23:33.592473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.592485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.412 [2024-07-12 11:23:33.592495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.592512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.412 [2024-07-12 
11:23:33.592522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.412 [2024-07-12 11:23:33.592534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.412 [2024-07-12 11:23:33.592544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE / ABORTED - SQ DELETION log-line pairs repeat for cid 5 through cid 63 (lba 90752 through 98176, stepping by 128), timestamps 2024-07-12 11:23:33.592555 through 11:23:33.593800 ...] 00:17:47.413 [2024-07-12 11:23:33.594099] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032da00 was disconnected and freed. reset controller. 
00:17:47.413 [2024-07-12 11:23:33.595072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:47.413 task offset: 90112 on job bdev=Nvme0n1 fails 00:17:47.413 00:17:47.413 Latency(us) 00:17:47.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.413 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:47.413 Job: Nvme0n1 ended in about 0.41 seconds with error 00:17:47.413 Verification LBA range: start 0x0 length 0x400 00:17:47.413 Nvme0n1 : 0.41 1711.14 106.95 155.56 0.00 33311.39 2008.82 31457.28 00:17:47.413 =================================================================================================================== 00:17:47.413 Total : 1711.14 106.95 155.56 0.00 33311.39 2008.82 31457.28 00:17:47.413 11:23:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.413 11:23:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:47.413 [2024-07-12 11:23:33.600411] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:47.413 [2024-07-12 11:23:33.600441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:17:47.413 [2024-07-12 11:23:33.605609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 893573 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:48.351 { 00:17:48.351 "params": { 00:17:48.351 "name": "Nvme$subsystem", 00:17:48.351 "trtype": "$TEST_TRANSPORT", 00:17:48.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.351 "adrfam": "ipv4", 00:17:48.351 "trsvcid": "$NVMF_PORT", 00:17:48.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.351 "hdgst": ${hdgst:-false}, 00:17:48.351 "ddgst": ${ddgst:-false} 00:17:48.351 }, 00:17:48.351 "method": "bdev_nvme_attach_controller" 00:17:48.351 } 00:17:48.351 EOF 00:17:48.351 )") 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:48.351 11:23:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:48.351 "params": { 00:17:48.351 "name": "Nvme0", 00:17:48.351 "trtype": "tcp", 00:17:48.351 "traddr": "10.0.0.2", 00:17:48.351 "adrfam": "ipv4", 00:17:48.351 "trsvcid": "4420", 00:17:48.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:48.351 "hdgst": false, 00:17:48.351 "ddgst": false 00:17:48.351 }, 00:17:48.351 "method": "bdev_nvme_attach_controller" 00:17:48.351 }' 00:17:48.351 [2024-07-12 11:23:34.679570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:48.351 [2024-07-12 11:23:34.679660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893967 ] 00:17:48.610 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.610 [2024-07-12 11:23:34.784233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.869 [2024-07-12 11:23:35.017984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.436 Running I/O for 1 seconds... 
00:17:50.374 00:17:50.374 Latency(us) 00:17:50.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.374 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:50.374 Verification LBA range: start 0x0 length 0x400 00:17:50.374 Nvme0n1 : 1.03 1802.09 112.63 0.00 0.00 34924.92 5128.90 30545.47 00:17:50.374 =================================================================================================================== 00:17:50.374 Total : 1802.09 112.63 0.00 0.00 34924.92 5128.90 30545.47 00:17:51.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 893573 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.752 rmmod nvme_tcp 00:17:51.752 rmmod nvme_fabrics 00:17:51.752 rmmod nvme_keyring 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 893436 ']' 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 893436 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 893436 ']' 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 893436 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 893436 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 893436' 00:17:51.752 killing process with pid 893436 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 893436 00:17:51.752 11:23:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 893436 00:17:53.128 [2024-07-12 11:23:39.323210] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.128 11:23:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.663 11:23:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.663 11:23:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:55.663 00:17:55.663 real 0m15.870s 00:17:55.663 user 0m36.273s 00:17:55.663 sys 0m5.421s 00:17:55.663 11:23:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.663 11:23:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:55.663 ************************************ 00:17:55.663 END TEST nvmf_host_management 00:17:55.663 ************************************ 00:17:55.663 11:23:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.663 11:23:41 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:55.663 11:23:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.663 11:23:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.663 11:23:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.663 ************************************ 00:17:55.663 START TEST 
nvmf_lvol 00:17:55.663 ************************************ 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:55.663 * Looking for test storage... 00:17:55.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.663 11:23:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:00.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:00.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:00.987 Found net devices under 0000:86:00.0: cvl_0_0 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.987 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:00.988 Found net devices under 0000:86:00.1: cvl_0_1 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:00.988 00:18:00.988 --- 10.0.0.2 ping statistics --- 00:18:00.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.988 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:18:00.988 00:18:00.988 --- 10.0.0.1 ping statistics --- 00:18:00.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.988 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=898174 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 898174 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 898174 ']' 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.988 11:23:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:00.988 [2024-07-12 11:23:47.029931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:00.988 [2024-07-12 11:23:47.030017] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.988 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.988 [2024-07-12 11:23:47.139773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.247 [2024-07-12 11:23:47.347767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.247 [2024-07-12 11:23:47.347809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.247 [2024-07-12 11:23:47.347825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.247 [2024-07-12 11:23:47.347833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.247 [2024-07-12 11:23:47.347843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.247 [2024-07-12 11:23:47.347912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.247 [2024-07-12 11:23:47.347977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.247 [2024-07-12 11:23:47.347985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.506 11:23:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.764 [2024-07-12 11:23:48.015426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.764 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.022 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:02.022 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.281 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:02.281 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:02.540 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:02.800 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3acdae36-7b65-466d-82e9-2376d87d897e 00:18:02.800 11:23:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3acdae36-7b65-466d-82e9-2376d87d897e lvol 20 00:18:02.800 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a019a011-6cb8-484e-adbe-7fa4078d5d91 00:18:02.800 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:03.058 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a019a011-6cb8-484e-adbe-7fa4078d5d91 00:18:03.317 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:03.318 [2024-07-12 11:23:49.618677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.318 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:03.577 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=898665 00:18:03.577 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:03.577 11:23:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:03.577 EAL: No free 2048 kB hugepages reported on node 1 
00:18:04.514 11:23:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a019a011-6cb8-484e-adbe-7fa4078d5d91 MY_SNAPSHOT 00:18:04.773 11:23:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8f0bfa4a-8d7c-41ac-9970-03965ffed32a 00:18:04.773 11:23:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a019a011-6cb8-484e-adbe-7fa4078d5d91 30 00:18:05.032 11:23:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8f0bfa4a-8d7c-41ac-9970-03965ffed32a MY_CLONE 00:18:05.290 11:23:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=00a899d5-3d6c-4fb2-9e4f-81e45768294f 00:18:05.290 11:23:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 00a899d5-3d6c-4fb2-9e4f-81e45768294f 00:18:05.856 11:23:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 898665 00:18:13.983 Initializing NVMe Controllers 00:18:13.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:13.983 Controller IO queue size 128, less than required. 00:18:13.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:13.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:13.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:13.983 Initialization complete. Launching workers. 
00:18:13.983 ======================================================== 00:18:13.983 Latency(us) 00:18:13.983 Device Information : IOPS MiB/s Average min max 00:18:13.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11043.48 43.14 11598.63 260.17 180841.11 00:18:13.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10795.89 42.17 11857.50 4522.45 147663.75 00:18:13.983 ======================================================== 00:18:13.983 Total : 21839.37 85.31 11726.60 260.17 180841.11 00:18:13.983 00:18:14.242 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:14.242 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a019a011-6cb8-484e-adbe-7fa4078d5d91 00:18:14.501 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3acdae36-7b65-466d-82e9-2376d87d897e 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.760 rmmod nvme_tcp 00:18:14.760 rmmod nvme_fabrics 00:18:14.760 rmmod nvme_keyring 00:18:14.760 
11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 898174 ']' 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 898174 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 898174 ']' 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 898174 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.760 11:24:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898174 00:18:14.760 11:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:14.760 11:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:14.760 11:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898174' 00:18:14.760 killing process with pid 898174 00:18:14.760 11:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 898174 00:18:14.760 11:24:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 898174 00:18:16.662 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.662 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.662 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.662 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.662 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.663 11:24:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:16.663 11:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.663 11:24:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.565 11:24:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:18.565 00:18:18.565 real 0m23.291s 00:18:18.565 user 1m7.861s 00:18:18.565 sys 0m6.842s 00:18:18.565 11:24:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:18.565 11:24:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 ************************************ 00:18:18.565 END TEST nvmf_lvol 00:18:18.565 ************************************ 00:18:18.565 11:24:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:18.565 11:24:04 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:18.565 11:24:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:18.565 11:24:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.565 11:24:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 ************************************ 00:18:18.565 START TEST nvmf_lvs_grow 00:18:18.565 ************************************ 00:18:18.565 11:24:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:18.824 * Looking for test storage... 
00:18:18.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.824 11:24:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.824 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.825 11:24:05 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.825 11:24:05 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.825 11:24:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.095 11:24:09 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.095 11:24:09 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:24.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:24.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:24.095 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.096 11:24:09 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:24.096 Found net devices under 0000:86:00.0: cvl_0_0 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:24.096 Found net devices under 0000:86:00.1: cvl_0_1 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.096 11:24:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:18:24.096 00:18:24.096 --- 10.0.0.2 ping statistics --- 00:18:24.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.096 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:18:24.096 00:18:24.096 --- 10.0.0.1 ping statistics --- 00:18:24.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.096 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=904670 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 904670 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 904670 ']' 
00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.096 11:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:24.096 [2024-07-12 11:24:10.325766] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:24.096 [2024-07-12 11:24:10.325848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.096 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.096 [2024-07-12 11:24:10.435254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.356 [2024-07-12 11:24:10.647257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.356 [2024-07-12 11:24:10.647302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.356 [2024-07-12 11:24:10.647313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.356 [2024-07-12 11:24:10.647324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.356 [2024-07-12 11:24:10.647333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.356 [2024-07-12 11:24:10.647359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.924 11:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:25.183 [2024-07-12 11:24:11.282344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:25.183 ************************************ 00:18:25.183 START TEST lvs_grow_clean 00:18:25.183 ************************************ 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.183 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:25.442 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:25.442 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:25.442 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:25.442 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:25.442 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:25.702 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:25.702 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:25.702 11:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cebfa872-15bf-4eb5-8139-ea7ef0413bed lvol 150 00:18:25.961 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=589cf87f-5f71-45cc-82f2-73597850255e 00:18:25.961 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:25.961 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:25.961 [2024-07-12 11:24:12.224267] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:25.961 [2024-07-12 11:24:12.224340] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:25.961 true 00:18:25.961 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:25.961 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:26.220 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:26.220 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:18:26.479 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 589cf87f-5f71-45cc-82f2-73597850255e 00:18:26.479 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:26.738 [2024-07-12 11:24:12.890424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.739 11:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=905167 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 905167 /var/tmp/bdevperf.sock 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 905167 ']' 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.739 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:26.997 [2024-07-12 11:24:13.133889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:26.997 [2024-07-12 11:24:13.133979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905167 ] 00:18:26.997 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.997 [2024-07-12 11:24:13.237905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.255 [2024-07-12 11:24:13.459832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.821 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.821 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:18:27.821 11:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:28.080 Nvme0n1 00:18:28.080 11:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:28.338 [ 00:18:28.338 { 00:18:28.338 "name": "Nvme0n1", 00:18:28.338 "aliases": [ 00:18:28.338 "589cf87f-5f71-45cc-82f2-73597850255e" 
00:18:28.338 ], 00:18:28.338 "product_name": "NVMe disk", 00:18:28.338 "block_size": 4096, 00:18:28.338 "num_blocks": 38912, 00:18:28.338 "uuid": "589cf87f-5f71-45cc-82f2-73597850255e", 00:18:28.338 "assigned_rate_limits": { 00:18:28.338 "rw_ios_per_sec": 0, 00:18:28.338 "rw_mbytes_per_sec": 0, 00:18:28.338 "r_mbytes_per_sec": 0, 00:18:28.338 "w_mbytes_per_sec": 0 00:18:28.338 }, 00:18:28.338 "claimed": false, 00:18:28.338 "zoned": false, 00:18:28.338 "supported_io_types": { 00:18:28.338 "read": true, 00:18:28.338 "write": true, 00:18:28.338 "unmap": true, 00:18:28.338 "flush": true, 00:18:28.338 "reset": true, 00:18:28.338 "nvme_admin": true, 00:18:28.338 "nvme_io": true, 00:18:28.338 "nvme_io_md": false, 00:18:28.338 "write_zeroes": true, 00:18:28.338 "zcopy": false, 00:18:28.338 "get_zone_info": false, 00:18:28.338 "zone_management": false, 00:18:28.338 "zone_append": false, 00:18:28.338 "compare": true, 00:18:28.338 "compare_and_write": true, 00:18:28.338 "abort": true, 00:18:28.338 "seek_hole": false, 00:18:28.338 "seek_data": false, 00:18:28.338 "copy": true, 00:18:28.338 "nvme_iov_md": false 00:18:28.338 }, 00:18:28.338 "memory_domains": [ 00:18:28.338 { 00:18:28.338 "dma_device_id": "system", 00:18:28.338 "dma_device_type": 1 00:18:28.338 } 00:18:28.338 ], 00:18:28.338 "driver_specific": { 00:18:28.338 "nvme": [ 00:18:28.338 { 00:18:28.338 "trid": { 00:18:28.338 "trtype": "TCP", 00:18:28.338 "adrfam": "IPv4", 00:18:28.338 "traddr": "10.0.0.2", 00:18:28.338 "trsvcid": "4420", 00:18:28.338 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:28.338 }, 00:18:28.338 "ctrlr_data": { 00:18:28.338 "cntlid": 1, 00:18:28.338 "vendor_id": "0x8086", 00:18:28.338 "model_number": "SPDK bdev Controller", 00:18:28.338 "serial_number": "SPDK0", 00:18:28.338 "firmware_revision": "24.09", 00:18:28.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:28.338 "oacs": { 00:18:28.338 "security": 0, 00:18:28.338 "format": 0, 00:18:28.338 "firmware": 0, 00:18:28.338 "ns_manage": 0 
00:18:28.338 }, 00:18:28.338 "multi_ctrlr": true, 00:18:28.338 "ana_reporting": false 00:18:28.338 }, 00:18:28.338 "vs": { 00:18:28.338 "nvme_version": "1.3" 00:18:28.338 }, 00:18:28.338 "ns_data": { 00:18:28.338 "id": 1, 00:18:28.338 "can_share": true 00:18:28.338 } 00:18:28.338 } 00:18:28.339 ], 00:18:28.339 "mp_policy": "active_passive" 00:18:28.339 } 00:18:28.339 } 00:18:28.339 ] 00:18:28.339 11:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.339 11:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=905405 00:18:28.339 11:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:28.339 Running I/O for 10 seconds... 00:18:29.274 Latency(us) 00:18:29.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.274 Nvme0n1 : 1.00 20008.00 78.16 0.00 0.00 0.00 0.00 0.00 00:18:29.274 =================================================================================================================== 00:18:29.274 Total : 20008.00 78.16 0.00 0.00 0.00 0.00 0.00 00:18:29.274 00:18:30.212 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:30.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.212 Nvme0n1 : 2.00 20142.50 78.68 0.00 0.00 0.00 0.00 0.00 00:18:30.212 =================================================================================================================== 00:18:30.212 Total : 20142.50 78.68 0.00 0.00 0.00 0.00 0.00 00:18:30.212 00:18:30.470 true 00:18:30.470 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean 
-- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:30.470 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:30.728 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:30.729 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:30.729 11:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 905405 00:18:31.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.296 Nvme0n1 : 3.00 20163.33 78.76 0.00 0.00 0.00 0.00 0.00 00:18:31.296 =================================================================================================================== 00:18:31.296 Total : 20163.33 78.76 0.00 0.00 0.00 0.00 0.00 00:18:31.296 00:18:32.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.232 Nvme0n1 : 4.00 20202.50 78.92 0.00 0.00 0.00 0.00 0.00 00:18:32.232 =================================================================================================================== 00:18:32.232 Total : 20202.50 78.92 0.00 0.00 0.00 0.00 0.00 00:18:32.232 00:18:33.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.291 Nvme0n1 : 5.00 20238.80 79.06 0.00 0.00 0.00 0.00 0.00 00:18:33.291 =================================================================================================================== 00:18:33.291 Total : 20238.80 79.06 0.00 0.00 0.00 0.00 0.00 00:18:33.291 00:18:34.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.227 Nvme0n1 : 6.00 20252.83 79.11 0.00 0.00 0.00 0.00 0.00 00:18:34.227 
=================================================================================================================== 00:18:34.227 Total : 20252.83 79.11 0.00 0.00 0.00 0.00 0.00 00:18:34.227 00:18:35.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.606 Nvme0n1 : 7.00 20262.43 79.15 0.00 0.00 0.00 0.00 0.00 00:18:35.606 =================================================================================================================== 00:18:35.606 Total : 20262.43 79.15 0.00 0.00 0.00 0.00 0.00 00:18:35.606 00:18:36.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.543 Nvme0n1 : 8.00 20286.38 79.24 0.00 0.00 0.00 0.00 0.00 00:18:36.543 =================================================================================================================== 00:18:36.543 Total : 20286.38 79.24 0.00 0.00 0.00 0.00 0.00 00:18:36.543 00:18:37.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.480 Nvme0n1 : 9.00 20305.33 79.32 0.00 0.00 0.00 0.00 0.00 00:18:37.480 =================================================================================================================== 00:18:37.480 Total : 20305.33 79.32 0.00 0.00 0.00 0.00 0.00 00:18:37.480 00:18:38.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.417 Nvme0n1 : 10.00 20321.50 79.38 0.00 0.00 0.00 0.00 0.00 00:18:38.417 =================================================================================================================== 00:18:38.417 Total : 20321.50 79.38 0.00 0.00 0.00 0.00 0.00 00:18:38.417 00:18:38.417 00:18:38.417 Latency(us) 00:18:38.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.417 Nvme0n1 : 10.00 20317.77 79.37 0.00 0.00 6295.81 2721.17 12594.31 00:18:38.417 
=================================================================================================================== 00:18:38.417 Total : 20317.77 79.37 0.00 0.00 6295.81 2721.17 12594.31 00:18:38.418 0 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 905167 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 905167 ']' 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 905167 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 905167 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 905167' 00:18:38.418 killing process with pid 905167 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 905167 00:18:38.418 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.418 00:18:38.418 Latency(us) 00:18:38.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.418 =================================================================================================================== 00:18:38.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.418 11:24:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 905167 00:18:39.355 11:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.613 11:24:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:39.870 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:39.870 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:40.128 [2024-07-12 11:24:26.392204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.128 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:40.129 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:40.386 request: 00:18:40.386 { 00:18:40.386 "uuid": "cebfa872-15bf-4eb5-8139-ea7ef0413bed", 00:18:40.386 "method": "bdev_lvol_get_lvstores", 00:18:40.386 "req_id": 1 00:18:40.386 } 00:18:40.386 Got JSON-RPC error response 00:18:40.386 response: 00:18:40.386 { 00:18:40.386 "code": -19, 00:18:40.386 "message": "No such device" 00:18:40.386 } 00:18:40.386 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:40.386 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:40.386 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:40.386 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:40.386 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:40.644 aio_bdev 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 589cf87f-5f71-45cc-82f2-73597850255e 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=589cf87f-5f71-45cc-82f2-73597850255e 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:40.644 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:40.645 11:24:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 589cf87f-5f71-45cc-82f2-73597850255e -t 2000 00:18:40.903 [ 00:18:40.903 { 00:18:40.903 "name": "589cf87f-5f71-45cc-82f2-73597850255e", 00:18:40.903 "aliases": [ 00:18:40.903 "lvs/lvol" 00:18:40.903 ], 00:18:40.903 "product_name": "Logical Volume", 00:18:40.903 "block_size": 4096, 00:18:40.903 "num_blocks": 38912, 00:18:40.903 "uuid": "589cf87f-5f71-45cc-82f2-73597850255e", 00:18:40.903 "assigned_rate_limits": { 00:18:40.903 
"rw_ios_per_sec": 0, 00:18:40.903 "rw_mbytes_per_sec": 0, 00:18:40.903 "r_mbytes_per_sec": 0, 00:18:40.903 "w_mbytes_per_sec": 0 00:18:40.903 }, 00:18:40.903 "claimed": false, 00:18:40.903 "zoned": false, 00:18:40.903 "supported_io_types": { 00:18:40.903 "read": true, 00:18:40.903 "write": true, 00:18:40.903 "unmap": true, 00:18:40.903 "flush": false, 00:18:40.903 "reset": true, 00:18:40.903 "nvme_admin": false, 00:18:40.903 "nvme_io": false, 00:18:40.903 "nvme_io_md": false, 00:18:40.903 "write_zeroes": true, 00:18:40.903 "zcopy": false, 00:18:40.903 "get_zone_info": false, 00:18:40.903 "zone_management": false, 00:18:40.903 "zone_append": false, 00:18:40.903 "compare": false, 00:18:40.903 "compare_and_write": false, 00:18:40.903 "abort": false, 00:18:40.903 "seek_hole": true, 00:18:40.903 "seek_data": true, 00:18:40.903 "copy": false, 00:18:40.903 "nvme_iov_md": false 00:18:40.903 }, 00:18:40.903 "driver_specific": { 00:18:40.903 "lvol": { 00:18:40.903 "lvol_store_uuid": "cebfa872-15bf-4eb5-8139-ea7ef0413bed", 00:18:40.903 "base_bdev": "aio_bdev", 00:18:40.903 "thin_provision": false, 00:18:40.903 "num_allocated_clusters": 38, 00:18:40.903 "snapshot": false, 00:18:40.903 "clone": false, 00:18:40.903 "esnap_clone": false 00:18:40.903 } 00:18:40.903 } 00:18:40.903 } 00:18:40.903 ] 00:18:40.903 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:18:40.903 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:40.903 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:41.161 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:41.161 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:41.161 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:41.161 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:41.161 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 589cf87f-5f71-45cc-82f2-73597850255e 00:18:41.420 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cebfa872-15bf-4eb5-8139-ea7ef0413bed 00:18:41.678 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:41.678 11:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.678 00:18:41.678 real 0m16.670s 00:18:41.678 user 0m16.375s 00:18:41.678 sys 0m1.411s 00:18:41.678 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.678 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:41.678 ************************************ 00:18:41.678 END TEST lvs_grow_clean 00:18:41.678 ************************************ 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:41.937 ************************************ 00:18:41.937 START TEST lvs_grow_dirty 00:18:41.937 ************************************ 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:41.937 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:42.196 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:42.196 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:42.196 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a203fee-a7b1-4688-b443-2bcb529f7741 lvol 150 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=eb8f364e-4921-41ff-9d38-23071df709fc 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.455 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:42.714 [2024-07-12 11:24:28.924532] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:42.714 [2024-07-12 11:24:28.924597] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:42.714 
true 00:18:42.714 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:42.714 11:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:42.972 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:42.972 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:42.972 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb8f364e-4921-41ff-9d38-23071df709fc 00:18:43.231 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:43.490 [2024-07-12 11:24:29.622711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=907885 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 907885 /var/tmp/bdevperf.sock 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 907885 ']' 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.490 11:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:43.747 [2024-07-12 11:24:29.867364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:43.747 [2024-07-12 11:24:29.867478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907885 ] 00:18:43.747 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.747 [2024-07-12 11:24:29.971953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.004 [2024-07-12 11:24:30.198438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.569 11:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.569 11:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:44.569 11:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:44.828 Nvme0n1 00:18:44.828 11:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:44.828 [ 00:18:44.828 { 00:18:44.828 "name": "Nvme0n1", 00:18:44.828 "aliases": [ 00:18:44.828 "eb8f364e-4921-41ff-9d38-23071df709fc" 00:18:44.828 ], 00:18:44.828 "product_name": "NVMe disk", 00:18:44.828 "block_size": 4096, 00:18:44.828 "num_blocks": 38912, 00:18:44.828 "uuid": "eb8f364e-4921-41ff-9d38-23071df709fc", 00:18:44.828 "assigned_rate_limits": { 00:18:44.828 "rw_ios_per_sec": 0, 00:18:44.828 "rw_mbytes_per_sec": 0, 00:18:44.828 "r_mbytes_per_sec": 0, 00:18:44.828 "w_mbytes_per_sec": 0 00:18:44.828 }, 00:18:44.828 "claimed": false, 00:18:44.828 "zoned": false, 00:18:44.828 "supported_io_types": { 00:18:44.828 "read": true, 00:18:44.828 "write": true, 
00:18:44.828 "unmap": true, 00:18:44.828 "flush": true, 00:18:44.828 "reset": true, 00:18:44.828 "nvme_admin": true, 00:18:44.828 "nvme_io": true, 00:18:44.828 "nvme_io_md": false, 00:18:44.828 "write_zeroes": true, 00:18:44.828 "zcopy": false, 00:18:44.828 "get_zone_info": false, 00:18:44.828 "zone_management": false, 00:18:44.828 "zone_append": false, 00:18:44.828 "compare": true, 00:18:44.828 "compare_and_write": true, 00:18:44.828 "abort": true, 00:18:44.828 "seek_hole": false, 00:18:44.828 "seek_data": false, 00:18:44.828 "copy": true, 00:18:44.828 "nvme_iov_md": false 00:18:44.828 }, 00:18:44.828 "memory_domains": [ 00:18:44.828 { 00:18:44.828 "dma_device_id": "system", 00:18:44.828 "dma_device_type": 1 00:18:44.828 } 00:18:44.828 ], 00:18:44.828 "driver_specific": { 00:18:44.828 "nvme": [ 00:18:44.828 { 00:18:44.828 "trid": { 00:18:44.828 "trtype": "TCP", 00:18:44.828 "adrfam": "IPv4", 00:18:44.828 "traddr": "10.0.0.2", 00:18:44.828 "trsvcid": "4420", 00:18:44.828 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:44.828 }, 00:18:44.828 "ctrlr_data": { 00:18:44.828 "cntlid": 1, 00:18:44.828 "vendor_id": "0x8086", 00:18:44.828 "model_number": "SPDK bdev Controller", 00:18:44.828 "serial_number": "SPDK0", 00:18:44.828 "firmware_revision": "24.09", 00:18:44.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:44.828 "oacs": { 00:18:44.828 "security": 0, 00:18:44.828 "format": 0, 00:18:44.828 "firmware": 0, 00:18:44.828 "ns_manage": 0 00:18:44.828 }, 00:18:44.828 "multi_ctrlr": true, 00:18:44.828 "ana_reporting": false 00:18:44.828 }, 00:18:44.828 "vs": { 00:18:44.828 "nvme_version": "1.3" 00:18:44.828 }, 00:18:44.828 "ns_data": { 00:18:44.828 "id": 1, 00:18:44.828 "can_share": true 00:18:44.828 } 00:18:44.828 } 00:18:44.828 ], 00:18:44.828 "mp_policy": "active_passive" 00:18:44.828 } 00:18:44.828 } 00:18:44.828 ] 00:18:44.828 11:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.828 11:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=908106 00:18:44.828 11:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:45.087 Running I/O for 10 seconds... 00:18:46.024 Latency(us) 00:18:46.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.024 Nvme0n1 : 1.00 20021.00 78.21 0.00 0.00 0.00 0.00 0.00 00:18:46.024 =================================================================================================================== 00:18:46.024 Total : 20021.00 78.21 0.00 0.00 0.00 0.00 0.00 00:18:46.024 00:18:46.960 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:46.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.960 Nvme0n1 : 2.00 20107.00 78.54 0.00 0.00 0.00 0.00 0.00 00:18:46.960 =================================================================================================================== 00:18:46.960 Total : 20107.00 78.54 0.00 0.00 0.00 0.00 0.00 00:18:46.960 00:18:47.219 true 00:18:47.219 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:47.219 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:47.219 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:47.219 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:47.219 11:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 908106 00:18:48.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.156 Nvme0n1 : 3.00 20121.67 78.60 0.00 0.00 0.00 0.00 0.00 00:18:48.156 =================================================================================================================== 00:18:48.156 Total : 20121.67 78.60 0.00 0.00 0.00 0.00 0.00 00:18:48.156 00:18:49.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.093 Nvme0n1 : 4.00 20187.75 78.86 0.00 0.00 0.00 0.00 0.00 00:18:49.093 =================================================================================================================== 00:18:49.093 Total : 20187.75 78.86 0.00 0.00 0.00 0.00 0.00 00:18:49.093 00:18:50.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.030 Nvme0n1 : 5.00 20214.20 78.96 0.00 0.00 0.00 0.00 0.00 00:18:50.030 =================================================================================================================== 00:18:50.030 Total : 20214.20 78.96 0.00 0.00 0.00 0.00 0.00 00:18:50.030 00:18:51.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.010 Nvme0n1 : 6.00 20243.00 79.07 0.00 0.00 0.00 0.00 0.00 00:18:51.010 =================================================================================================================== 00:18:51.010 Total : 20243.00 79.07 0.00 0.00 0.00 0.00 0.00 00:18:51.010 00:18:51.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.944 Nvme0n1 : 7.00 20272.43 79.19 0.00 0.00 0.00 0.00 0.00 00:18:51.944 =================================================================================================================== 00:18:51.944 Total : 20272.43 79.19 0.00 0.00 0.00 0.00 0.00 00:18:51.944 00:18:53.319 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.319 Nvme0n1 : 8.00 20282.75 79.23 0.00 0.00 0.00 0.00 0.00 00:18:53.319 =================================================================================================================== 00:18:53.319 Total : 20282.75 79.23 0.00 0.00 0.00 0.00 0.00 00:18:53.319 00:18:54.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.257 Nvme0n1 : 9.00 20280.11 79.22 0.00 0.00 0.00 0.00 0.00 00:18:54.257 =================================================================================================================== 00:18:54.257 Total : 20280.11 79.22 0.00 0.00 0.00 0.00 0.00 00:18:54.257 00:18:55.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.193 Nvme0n1 : 10.00 20285.30 79.24 0.00 0.00 0.00 0.00 0.00 00:18:55.193 =================================================================================================================== 00:18:55.193 Total : 20285.30 79.24 0.00 0.00 0.00 0.00 0.00 00:18:55.193 00:18:55.193 00:18:55.193 Latency(us) 00:18:55.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.193 Nvme0n1 : 10.00 20288.48 79.25 0.00 0.00 6305.82 1994.57 11340.58 00:18:55.194 =================================================================================================================== 00:18:55.194 Total : 20288.48 79.25 0.00 0.00 6305.82 1994.57 11340.58 00:18:55.194 0 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 907885 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 907885 ']' 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 907885 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 
-- # uname 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 907885 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 907885' 00:18:55.194 killing process with pid 907885 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 907885 00:18:55.194 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.194 00:18:55.194 Latency(us) 00:18:55.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.194 =================================================================================================================== 00:18:55.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.194 11:24:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 907885 00:18:56.131 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:56.390 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.390 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:56.390 11:24:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 904670 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 904670 00:18:56.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 904670 Killed "${NVMF_APP[@]}" "$@" 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=910154 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 910154 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 910154 ']' 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 
-- # local max_retries=100 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.650 11:24:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:56.910 [2024-07-12 11:24:43.060909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:56.910 [2024-07-12 11:24:43.060995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.910 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.910 [2024-07-12 11:24:43.173639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.168 [2024-07-12 11:24:43.379793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.168 [2024-07-12 11:24:43.379833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.168 [2024-07-12 11:24:43.379847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.168 [2024-07-12 11:24:43.379858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.168 [2024-07-12 11:24:43.379868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.168 [2024-07-12 11:24:43.379894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.737 11:24:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:57.737 [2024-07-12 11:24:44.025106] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:57.737 [2024-07-12 11:24:44.025250] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:57.737 [2024-07-12 11:24:44.025290] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev eb8f364e-4921-41ff-9d38-23071df709fc 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=eb8f364e-4921-41ff-9d38-23071df709fc 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:57.737 11:24:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:57.737 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:57.995 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb8f364e-4921-41ff-9d38-23071df709fc -t 2000 00:18:58.253 [ 00:18:58.253 { 00:18:58.253 "name": "eb8f364e-4921-41ff-9d38-23071df709fc", 00:18:58.253 "aliases": [ 00:18:58.253 "lvs/lvol" 00:18:58.253 ], 00:18:58.253 "product_name": "Logical Volume", 00:18:58.253 "block_size": 4096, 00:18:58.253 "num_blocks": 38912, 00:18:58.253 "uuid": "eb8f364e-4921-41ff-9d38-23071df709fc", 00:18:58.253 "assigned_rate_limits": { 00:18:58.253 "rw_ios_per_sec": 0, 00:18:58.253 "rw_mbytes_per_sec": 0, 00:18:58.253 "r_mbytes_per_sec": 0, 00:18:58.253 "w_mbytes_per_sec": 0 00:18:58.253 }, 00:18:58.253 "claimed": false, 00:18:58.253 "zoned": false, 00:18:58.253 "supported_io_types": { 00:18:58.253 "read": true, 00:18:58.253 "write": true, 00:18:58.253 "unmap": true, 00:18:58.253 "flush": false, 00:18:58.253 "reset": true, 00:18:58.253 "nvme_admin": false, 00:18:58.253 "nvme_io": false, 00:18:58.253 "nvme_io_md": false, 00:18:58.253 "write_zeroes": true, 00:18:58.253 "zcopy": false, 00:18:58.253 "get_zone_info": false, 00:18:58.253 "zone_management": false, 00:18:58.253 "zone_append": false, 00:18:58.253 "compare": false, 00:18:58.253 "compare_and_write": false, 00:18:58.253 "abort": false, 00:18:58.253 "seek_hole": true, 00:18:58.253 "seek_data": true, 00:18:58.253 "copy": false, 00:18:58.253 "nvme_iov_md": false 
00:18:58.253 }, 00:18:58.253 "driver_specific": { 00:18:58.253 "lvol": { 00:18:58.253 "lvol_store_uuid": "2a203fee-a7b1-4688-b443-2bcb529f7741", 00:18:58.253 "base_bdev": "aio_bdev", 00:18:58.253 "thin_provision": false, 00:18:58.253 "num_allocated_clusters": 38, 00:18:58.253 "snapshot": false, 00:18:58.253 "clone": false, 00:18:58.253 "esnap_clone": false 00:18:58.253 } 00:18:58.253 } 00:18:58.253 } 00:18:58.253 ] 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:58.253 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:58.512 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:58.512 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:58.771 [2024-07-12 11:24:44.901410] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:58.771 11:24:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:58.771 request: 00:18:58.771 { 00:18:58.771 "uuid": "2a203fee-a7b1-4688-b443-2bcb529f7741", 00:18:58.771 "method": "bdev_lvol_get_lvstores", 
00:18:58.771 "req_id": 1 00:18:58.771 } 00:18:58.771 Got JSON-RPC error response 00:18:58.771 response: 00:18:58.771 { 00:18:58.771 "code": -19, 00:18:58.771 "message": "No such device" 00:18:58.771 } 00:18:58.771 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:58.771 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:58.771 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:58.771 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:58.771 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:59.031 aio_bdev 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eb8f364e-4921-41ff-9d38-23071df709fc 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=eb8f364e-4921-41ff-9d38-23071df709fc 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:59.031 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:59.293 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb8f364e-4921-41ff-9d38-23071df709fc -t 2000 00:18:59.293 [ 00:18:59.293 { 00:18:59.293 "name": "eb8f364e-4921-41ff-9d38-23071df709fc", 00:18:59.293 "aliases": [ 00:18:59.293 "lvs/lvol" 00:18:59.293 ], 00:18:59.293 "product_name": "Logical Volume", 00:18:59.293 "block_size": 4096, 00:18:59.293 "num_blocks": 38912, 00:18:59.293 "uuid": "eb8f364e-4921-41ff-9d38-23071df709fc", 00:18:59.293 "assigned_rate_limits": { 00:18:59.293 "rw_ios_per_sec": 0, 00:18:59.293 "rw_mbytes_per_sec": 0, 00:18:59.293 "r_mbytes_per_sec": 0, 00:18:59.293 "w_mbytes_per_sec": 0 00:18:59.293 }, 00:18:59.293 "claimed": false, 00:18:59.293 "zoned": false, 00:18:59.293 "supported_io_types": { 00:18:59.293 "read": true, 00:18:59.293 "write": true, 00:18:59.293 "unmap": true, 00:18:59.293 "flush": false, 00:18:59.293 "reset": true, 00:18:59.293 "nvme_admin": false, 00:18:59.293 "nvme_io": false, 00:18:59.293 "nvme_io_md": false, 00:18:59.293 "write_zeroes": true, 00:18:59.293 "zcopy": false, 00:18:59.293 "get_zone_info": false, 00:18:59.293 "zone_management": false, 00:18:59.293 "zone_append": false, 00:18:59.293 "compare": false, 00:18:59.293 "compare_and_write": false, 00:18:59.293 "abort": false, 00:18:59.293 "seek_hole": true, 00:18:59.293 "seek_data": true, 00:18:59.293 "copy": false, 00:18:59.293 "nvme_iov_md": false 00:18:59.293 }, 00:18:59.293 "driver_specific": { 00:18:59.293 "lvol": { 00:18:59.293 "lvol_store_uuid": "2a203fee-a7b1-4688-b443-2bcb529f7741", 00:18:59.293 "base_bdev": "aio_bdev", 00:18:59.293 "thin_provision": false, 00:18:59.293 "num_allocated_clusters": 38, 00:18:59.293 "snapshot": false, 00:18:59.293 "clone": false, 00:18:59.293 "esnap_clone": false 00:18:59.293 } 00:18:59.293 } 00:18:59.293 } 00:18:59.293 ] 00:18:59.293 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:59.293 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:59.293 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:59.639 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:59.639 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:18:59.639 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:59.639 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:59.639 11:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb8f364e-4921-41ff-9d38-23071df709fc 00:18:59.898 11:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a203fee-a7b1-4688-b443-2bcb529f7741 00:19:00.157 11:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:00.157 11:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:00.157 00:19:00.157 real 0m18.422s 00:19:00.157 user 0m47.644s 00:19:00.157 sys 0m3.702s 00:19:00.157 11:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:00.157 11:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:19:00.157 ************************************ 00:19:00.157 END TEST lvs_grow_dirty 00:19:00.157 ************************************ 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:00.417 nvmf_trace.0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.417 rmmod 
nvme_tcp 00:19:00.417 rmmod nvme_fabrics 00:19:00.417 rmmod nvme_keyring 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 910154 ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 910154 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 910154 ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 910154 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 910154 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 910154' 00:19:00.417 killing process with pid 910154 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 910154 00:19:00.417 11:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 910154 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.797 
11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.797 11:24:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.703 11:24:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.703 00:19:03.703 real 0m45.135s 00:19:03.703 user 1m10.726s 00:19:03.703 sys 0m9.572s 00:19:03.703 11:24:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:03.703 11:24:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:03.703 ************************************ 00:19:03.703 END TEST nvmf_lvs_grow 00:19:03.703 ************************************ 00:19:03.961 11:24:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:03.961 11:24:50 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.961 11:24:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:03.961 11:24:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.961 11:24:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.961 ************************************ 00:19:03.961 START TEST nvmf_bdev_io_wait 00:19:03.961 ************************************ 00:19:03.961 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.961 * Looking for test storage... 
00:19:03.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.961 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.961 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:19:03.961 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.962 11:24:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.233 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.234 11:24:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:09.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:09.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:09.234 11:24:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:09.234 Found net devices under 0000:86:00.0: cvl_0_0 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:09.234 Found net devices under 0000:86:00.1: cvl_0_1 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:09.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:09.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:19:09.234 00:19:09.234 --- 10.0.0.2 ping statistics --- 00:19:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.234 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:09.234 00:19:09.234 --- 10.0.0.1 ping statistics --- 00:19:09.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.234 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.234 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=914197 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 914197 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 914197 ']' 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.235 11:24:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:09.235 [2024-07-12 11:24:54.840539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:09.235 [2024-07-12 11:24:54.840627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.235 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.235 [2024-07-12 11:24:54.950372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.235 [2024-07-12 11:24:55.166522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:09.235 [2024-07-12 11:24:55.166567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.235 [2024-07-12 11:24:55.166578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.235 [2024-07-12 11:24:55.166586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.235 [2024-07-12 11:24:55.166611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.235 [2024-07-12 11:24:55.166697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.235 [2024-07-12 11:24:55.166766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.235 [2024-07-12 11:24:55.166823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.235 [2024-07-12 11:24:55.166834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.494 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 [2024-07-12 11:24:55.920474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 Malloc0 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:09.755 [2024-07-12 11:24:56.051981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=914451 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=914453 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.755 { 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme$subsystem", 00:19:09.755 "trtype": "$TEST_TRANSPORT", 
00:19:09.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "$NVMF_PORT", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.755 "hdgst": ${hdgst:-false}, 00:19:09.755 "ddgst": ${ddgst:-false} 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 } 00:19:09.755 EOF 00:19:09.755 )") 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=914455 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.755 { 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme$subsystem", 00:19:09.755 "trtype": "$TEST_TRANSPORT", 00:19:09.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "$NVMF_PORT", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.755 "hdgst": ${hdgst:-false}, 00:19:09.755 "ddgst": ${ddgst:-false} 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 } 00:19:09.755 EOF 00:19:09.755 )") 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=914458 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.755 { 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme$subsystem", 00:19:09.755 "trtype": "$TEST_TRANSPORT", 00:19:09.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "$NVMF_PORT", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.755 "hdgst": ${hdgst:-false}, 00:19:09.755 "ddgst": ${ddgst:-false} 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 } 00:19:09.755 EOF 00:19:09.755 )") 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.755 { 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme$subsystem", 00:19:09.755 "trtype": "$TEST_TRANSPORT", 00:19:09.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "$NVMF_PORT", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.755 "hdgst": ${hdgst:-false}, 00:19:09.755 "ddgst": ${ddgst:-false} 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 } 00:19:09.755 EOF 00:19:09.755 )") 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 914451 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme1", 00:19:09.755 "trtype": "tcp", 00:19:09.755 "traddr": "10.0.0.2", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "4420", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.755 "hdgst": false, 00:19:09.755 "ddgst": false 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 }' 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
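The printf output above is gen_nvmf_target_json's heredoc template with its variables resolved. A minimal re-creation of that expansion for one subsystem, using the values this log resolved them to (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420) — a sketch of the mechanism, not the full helper:

```shell
# Re-create gen_nvmf_target_json's heredoc expansion for subsystem 1.
# Variable values match what this log's printf '%s\n' lines show.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# hdgst/ddgst are unset here, so the ${var:-false} defaults apply,
# matching the "hdgst": false / "ddgst": false in the resolved config.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the test this document is piped through `jq .` and fed to bdevperf via `--json /dev/fd/63`, which is why each of the four bdevperf instances prints its own resolved copy.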
00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme1", 00:19:09.755 "trtype": "tcp", 00:19:09.755 "traddr": "10.0.0.2", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "4420", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.755 "hdgst": false, 00:19:09.755 "ddgst": false 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 }' 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme1", 00:19:09.755 "trtype": "tcp", 00:19:09.755 "traddr": "10.0.0.2", 00:19:09.755 "adrfam": "ipv4", 00:19:09.755 "trsvcid": "4420", 00:19:09.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.755 "hdgst": false, 00:19:09.755 "ddgst": false 00:19:09.755 }, 00:19:09.755 "method": "bdev_nvme_attach_controller" 00:19:09.755 }' 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:09.755 11:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.755 "params": { 00:19:09.755 "name": "Nvme1", 00:19:09.755 "trtype": "tcp", 00:19:09.756 "traddr": "10.0.0.2", 00:19:09.756 "adrfam": "ipv4", 00:19:09.756 "trsvcid": "4420", 00:19:09.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.756 "hdgst": false, 00:19:09.756 "ddgst": false 00:19:09.756 }, 00:19:09.756 "method": "bdev_nvme_attach_controller" 00:19:09.756 }' 00:19:10.015 [2024-07-12 11:24:56.126681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:10.015 [2024-07-12 11:24:56.126777] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:10.015 [2024-07-12 11:24:56.131962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:10.015 [2024-07-12 11:24:56.132013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:10.015 [2024-07-12 11:24:56.132059] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:10.015 [2024-07-12 11:24:56.132097] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:10.015 [2024-07-12 11:24:56.133589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:10.015 [2024-07-12 11:24:56.133668] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:10.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.015 [2024-07-12 11:24:56.304731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.274 [2024-07-12 11:24:56.397950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.274 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.274 [2024-07-12 11:24:56.456427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.274 [2024-07-12 11:24:56.515841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:10.274 [2024-07-12 11:24:56.553757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.274 [2024-07-12 11:24:56.616611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:10.532 [2024-07-12 11:24:56.665047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:19:10.532 [2024-07-12 11:24:56.771417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:10.790 Running I/O for 1 seconds... 00:19:10.790 Running I/O for 1 seconds... 00:19:10.790 Running I/O for 1 seconds... 00:19:11.048 Running I/O for 1 seconds... 
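The bdevperf result tables that follow report both IOPS and MiB/s; since all four workloads run with 4 KiB IOs (`-o 4096`), the MiB/s column is simply IOPS / 256. A quick cross-check against the rows in this log:

```shell
# MiB/s = IOPS * io_size / 2^20; with -o 4096 that reduces to IOPS / 256.
mib_per_s() { awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops / 256 }'; }

mib_per_s 215588.09   # flush  -> 842.14, matches the table below
mib_per_s 11726.73    # unmap  -> 45.81
mib_per_s 9024.67     # read   -> 35.25
mib_per_s 9059.57     # write  -> 35.39
```

The flush row's outsized IOPS is expected: flushes against a malloc bdev complete almost immediately, while read/write/unmap move real 4 KiB payloads over the TCP transport.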
00:19:11.983 00:19:11.983 Latency(us) 00:19:11.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.984 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:11.984 Nvme1n1 : 1.00 215588.09 842.14 0.00 0.00 591.56 240.42 715.91 00:19:11.984 =================================================================================================================== 00:19:11.984 Total : 215588.09 842.14 0.00 0.00 591.56 240.42 715.91 00:19:11.984 00:19:11.984 Latency(us) 00:19:11.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.984 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:11.984 Nvme1n1 : 1.01 11726.73 45.81 0.00 0.00 10880.39 5556.31 17210.32 00:19:11.984 =================================================================================================================== 00:19:11.984 Total : 11726.73 45.81 0.00 0.00 10880.39 5556.31 17210.32 00:19:11.984 00:19:11.984 Latency(us) 00:19:11.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.984 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:11.984 Nvme1n1 : 1.01 9024.67 35.25 0.00 0.00 14117.47 8776.13 21997.30 00:19:11.984 =================================================================================================================== 00:19:11.984 Total : 9024.67 35.25 0.00 0.00 14117.47 8776.13 21997.30 00:19:11.984 00:19:11.984 Latency(us) 00:19:11.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.984 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:11.984 Nvme1n1 : 1.01 9059.57 35.39 0.00 0.00 14080.54 5499.33 25302.59 00:19:11.984 =================================================================================================================== 00:19:11.984 Total : 9059.57 35.39 0.00 0.00 14080.54 5499.33 25302.59 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 914453 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 914455 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 914458 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.360 rmmod nvme_tcp 00:19:13.360 rmmod nvme_fabrics 00:19:13.360 rmmod nvme_keyring 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 914197 ']' 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 914197 00:19:13.360 11:24:59 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 914197 ']' 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 914197 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 914197 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 914197' 00:19:13.360 killing process with pid 914197 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 914197 00:19:13.360 11:24:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 914197 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.737 11:25:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.640 11:25:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.640 00:19:16.640 real 0m12.746s 00:19:16.640 user 0m32.669s 00:19:16.640 sys 0m5.409s 00:19:16.640 11:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.640 11:25:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:16.640 ************************************ 00:19:16.640 END TEST nvmf_bdev_io_wait 00:19:16.640 ************************************ 00:19:16.640 11:25:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:16.640 11:25:02 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:16.640 11:25:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:16.640 11:25:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.640 11:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.640 ************************************ 00:19:16.640 START TEST nvmf_queue_depth 00:19:16.640 ************************************ 00:19:16.640 11:25:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:16.898 * Looking for test storage... 
00:19:16.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.898 11:25:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.168 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:22.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:22.169 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:22.169 Found net devices under 0000:86:00.0: cvl_0_0 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:22.169 Found net devices under 0000:86:00.1: cvl_0_1 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:19:22.169 00:19:22.169 --- 10.0.0.2 ping statistics --- 00:19:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.169 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:22.169 00:19:22.169 --- 10.0.0.1 ping statistics --- 00:19:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.169 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=918696 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 918696 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 918696 ']' 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.169 11:25:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:22.169 [2024-07-12 11:25:08.470439] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:22.169 [2024-07-12 11:25:08.470529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.429 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.429 [2024-07-12 11:25:08.580836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.688 [2024-07-12 11:25:08.792189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:22.688 [2024-07-12 11:25:08.792227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.688 [2024-07-12 11:25:08.792239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.688 [2024-07-12 11:25:08.792251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.688 [2024-07-12 11:25:08.792261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.688 [2024-07-12 11:25:08.792293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:22.946 [2024-07-12 11:25:09.275512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.946 11:25:09 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.946 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:23.204 Malloc0 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:23.204 [2024-07-12 11:25:09.399181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=918857 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 918857 /var/tmp/bdevperf.sock 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 918857 ']' 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.204 11:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:23.204 [2024-07-12 11:25:09.474129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:23.204 [2024-07-12 11:25:09.474219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918857 ] 00:19:23.204 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.463 [2024-07-12 11:25:09.577000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.463 [2024-07-12 11:25:09.788735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.031 11:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.031 11:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:19:24.031 11:25:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:24.031 11:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.031 11:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:24.289 NVMe0n1 00:19:24.289 11:25:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.289 11:25:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.289 Running I/O for 10 seconds... 
00:19:34.304 00:19:34.304 Latency(us) 00:19:34.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.304 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:34.304 Verification LBA range: start 0x0 length 0x4000 00:19:34.304 NVMe0n1 : 10.06 10712.47 41.85 0.00 0.00 95193.34 18236.10 59723.24 00:19:34.304 =================================================================================================================== 00:19:34.304 Total : 10712.47 41.85 0.00 0.00 95193.34 18236.10 59723.24 00:19:34.563 0 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 918857 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 918857 ']' 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 918857 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 918857 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 918857' 00:19:34.563 killing process with pid 918857 00:19:34.563 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 918857 00:19:34.563 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.563 00:19:34.564 Latency(us) 00:19:34.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.564 
=================================================================================================================== 00:19:34.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.564 11:25:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 918857 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:35.500 rmmod nvme_tcp 00:19:35.500 rmmod nvme_fabrics 00:19:35.500 rmmod nvme_keyring 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 918696 ']' 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 918696 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 918696 ']' 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 918696 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:19:35.500 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.500 11:25:21 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 918696 00:19:35.759 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:35.759 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:35.759 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 918696' 00:19:35.759 killing process with pid 918696 00:19:35.759 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 918696 00:19:35.759 11:25:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 918696 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.134 11:25:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.039 11:25:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.039 00:19:39.039 real 0m22.443s 00:19:39.039 user 0m27.805s 00:19:39.039 sys 0m5.762s 00:19:39.039 11:25:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.039 11:25:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:39.039 ************************************ 00:19:39.039 END TEST nvmf_queue_depth 00:19:39.039 
************************************ 00:19:39.325 11:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:39.325 11:25:25 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:39.325 11:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:39.325 11:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.325 11:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.325 ************************************ 00:19:39.325 START TEST nvmf_target_multipath 00:19:39.325 ************************************ 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:39.325 * Looking for test storage... 00:19:39.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.325 
11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.325 11:25:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.597 
11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.597 11:25:30 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.597 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.597 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.597 
11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.597 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.597 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.597 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.597 11:25:30 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:44.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:44.598 00:19:44.598 --- 10.0.0.2 ping statistics --- 00:19:44.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.598 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:44.598 00:19:44.598 --- 10.0.0.1 ping statistics --- 00:19:44.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.598 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:44.598 only one NIC for nvmf test 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.598 rmmod nvme_tcp 00:19:44.598 rmmod nvme_fabrics 00:19:44.598 rmmod nvme_keyring 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.598 11:25:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.131 00:19:47.131 real 0m7.524s 00:19:47.131 user 0m1.446s 00:19:47.131 sys 0m4.032s 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.131 11:25:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:47.131 ************************************ 00:19:47.131 END TEST nvmf_target_multipath 00:19:47.131 ************************************ 00:19:47.131 11:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:47.131 11:25:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:47.131 11:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:47.131 11:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.131 11:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.131 ************************************ 00:19:47.131 START TEST nvmf_zcopy 00:19:47.131 ************************************ 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:47.131 * Looking for test storage... 
00:19:47.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.131 11:25:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.398 11:25:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.398 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.399 11:25:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.399 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.399 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.399 11:25:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:52.399 00:19:52.399 --- 10.0.0.2 ping statistics --- 00:19:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.399 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:19:52.399 00:19:52.399 --- 10.0.0.1 ping statistics --- 00:19:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.399 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=927806 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 927806 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 927806 ']' 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.399 11:25:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:52.399 [2024-07-12 11:25:38.645204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:52.399 [2024-07-12 11:25:38.645295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.399 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.658 [2024-07-12 11:25:38.755193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.658 [2024-07-12 11:25:38.975722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.658 [2024-07-12 11:25:38.975767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.658 [2024-07-12 11:25:38.975779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.658 [2024-07-12 11:25:38.975789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.658 [2024-07-12 11:25:38.975798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
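The netns topology that `nvmf_tcp_init` traced above can be summarized in a short sketch. The interface names, addresses, and port come straight from the log (one port of the dual-port E810, `cvl_0_0`, becomes the target inside namespace `cvl_0_0_ns_spdk`; its peer `cvl_0_1` stays in the root namespace as the initiator). The commands are collected as data rather than executed, since running them verbatim needs root and the real NICs.

```shell
# Sketch of the namespace setup from the trace above; collected as an array
# so the sequence can be inspected without root or the physical interfaces.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1

setup_cmds=(
  "ip netns add $ns"
  "ip link set $target_if netns $ns"
  "ip addr add 10.0.0.1/24 dev $initiator_if"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "ip netns exec $ns ip link set lo up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${setup_cmds[@]}"
```

The two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the netns) are the sanity check that this topology is up; the target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` gets `NVMF_TARGET_NS_CMD` prepended.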
00:19:52.658 [2024-07-12 11:25:38.975823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 [2024-07-12 11:25:39.456913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 [2024-07-12 11:25:39.473065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 malloc0 00:19:53.227 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.228 { 00:19:53.228 "params": { 00:19:53.228 "name": "Nvme$subsystem", 00:19:53.228 "trtype": "$TEST_TRANSPORT", 00:19:53.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.228 "adrfam": "ipv4", 00:19:53.228 "trsvcid": "$NVMF_PORT", 00:19:53.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.228 "hdgst": ${hdgst:-false}, 00:19:53.228 "ddgst": ${ddgst:-false} 00:19:53.228 }, 00:19:53.228 "method": "bdev_nvme_attach_controller" 00:19:53.228 } 00:19:53.228 EOF 00:19:53.228 )") 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:53.228 11:25:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.228 "params": { 00:19:53.228 "name": "Nvme1", 00:19:53.228 "trtype": "tcp", 00:19:53.228 "traddr": "10.0.0.2", 00:19:53.228 "adrfam": "ipv4", 00:19:53.228 "trsvcid": "4420", 00:19:53.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.228 "hdgst": false, 00:19:53.228 "ddgst": false 00:19:53.228 }, 00:19:53.228 "method": "bdev_nvme_attach_controller" 00:19:53.228 }' 00:19:53.486 [2024-07-12 11:25:39.627197] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
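The heredoc template traced above (`gen_nvmf_target_json`) and its resolved form can be approximated with a small self-contained function. The NQNs, address, and digest flags mirror the log exactly; the `subsystems`/`bdev`/`config` wrapper is an assumption about the final shape handed to bdevperf on `/dev/fd/62`, not a copy of the real `common.sh` implementation.

```shell
# Approximation of gen_nvmf_target_json: one bdev_nvme_attach_controller
# entry per subsystem number, joined with commas into a bdev config.
# Wrapper layout is assumed, not taken from the log.
gen_target_json() {
  local subsystem
  local entries=()
  for subsystem in "${@:-1}"; do
    entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}"
}

config_json="$(gen_target_json 1)"
echo "$config_json"
```

With a single argument this reproduces the one-controller config printed in the trace; passing more subsystem numbers yields one attach entry per target, which is what the `for subsystem in "${@:-1}"` loop in the log is there for.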
00:19:53.486 [2024-07-12 11:25:39.627286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928053 ] 00:19:53.486 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.486 [2024-07-12 11:25:39.728466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.744 [2024-07-12 11:25:39.949456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.311 Running I/O for 10 seconds... 00:20:04.287 00:20:04.287 Latency(us) 00:20:04.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:04.287 Verification LBA range: start 0x0 length 0x1000 00:20:04.287 Nvme1n1 : 10.01 7430.12 58.05 0.00 0.00 17178.40 566.32 25530.55 00:20:04.287 =================================================================================================================== 00:20:04.287 Total : 7430.12 58.05 0.00 0.00 17178.40 566.32 25530.55 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=929905 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.225 11:25:51 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.225 { 00:20:05.225 "params": { 00:20:05.225 "name": "Nvme$subsystem", 00:20:05.225 "trtype": "$TEST_TRANSPORT", 00:20:05.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.225 "adrfam": "ipv4", 00:20:05.225 "trsvcid": "$NVMF_PORT", 00:20:05.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.225 "hdgst": ${hdgst:-false}, 00:20:05.225 "ddgst": ${ddgst:-false} 00:20:05.225 }, 00:20:05.225 "method": "bdev_nvme_attach_controller" 00:20:05.225 } 00:20:05.225 EOF 00:20:05.225 )") 00:20:05.225 [2024-07-12 11:25:51.490956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.490996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:20:05.225 11:25:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:05.225 "params": { 00:20:05.225 "name": "Nvme1", 00:20:05.225 "trtype": "tcp", 00:20:05.225 "traddr": "10.0.0.2", 00:20:05.225 "adrfam": "ipv4", 00:20:05.225 "trsvcid": "4420", 00:20:05.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.225 "hdgst": false, 00:20:05.225 "ddgst": false 00:20:05.225 }, 00:20:05.225 "method": "bdev_nvme_attach_controller" 00:20:05.225 }' 00:20:05.225 [2024-07-12 11:25:51.502923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.502951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.510951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.510976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.518962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.518985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.530985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.531007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.543031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.543053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.552797] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:05.225 [2024-07-12 11:25:51.552875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929905 ] 00:20:05.225 [2024-07-12 11:25:51.555057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.555079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.567081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.567103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.225 [2024-07-12 11:25:51.579134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.225 [2024-07-12 11:25:51.579159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.591150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.591172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.603198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.603219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.485 [2024-07-12 11:25:51.615229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.615251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.627244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.627265] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.639291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.639311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.651323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.651344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.656715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.485 [2024-07-12 11:25:51.663349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.663369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.675399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.675420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.687421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.687442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.699466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.699487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.711493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.711514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.723513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:20:05.485 [2024-07-12 11:25:51.723532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.735562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.735582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.747588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.747609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.759628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.759648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.771663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.771688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.783678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.783699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.795722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.795743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.807753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.807772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.819776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.819795] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.485 [2024-07-12 11:25:51.831821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.485 [2024-07-12 11:25:51.831841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.843851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.843872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.855866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.855886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.867919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.867939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.879938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.879958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.881328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.745 [2024-07-12 11:25:51.891985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.892005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.904036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.904061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.916043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:20:05.745 [2024-07-12 11:25:51.916063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.928084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.928103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.940120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.940139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.952143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.952163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.964184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.964203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.976211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.976233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:51.988262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:51.988294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.000286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.000306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.012302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 
11:25:52.012322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.024345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.024365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.036382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.036402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.048422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.048442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.060449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.060468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.072472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.072491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.084517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.084539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:05.745 [2024-07-12 11:25:52.096545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:05.745 [2024-07-12 11:25:52.096565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.108565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.108586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.120609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.120628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.132640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.132660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.144668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.144689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.156707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.156726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.168731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.168751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.180769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.180788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.192816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.192837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.204824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.204844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 
[2024-07-12 11:25:52.216869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.216891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.228904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.228923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.240925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.240944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.252976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.252996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.265071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.265095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.277133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.277154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.289179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.289202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.301196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.301217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.313237] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.313257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.325294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.325317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 Running I/O for 5 seconds... 00:20:06.004 [2024-07-12 11:25:52.340320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.340345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.004 [2024-07-12 11:25:52.356479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.004 [2024-07-12 11:25:52.356504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.372682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.372707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.388864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.388890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.403080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.403104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.418784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.418809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.430704] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.430729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.446416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.263 [2024-07-12 11:25:52.446440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.263 [2024-07-12 11:25:52.457846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.457871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.467749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.467777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.484472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.484499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.500731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.500756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.511577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.511602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.528325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.528350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.543781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.543806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.555826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.555850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.571227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.571252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.583500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.583523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.593500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.593524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.264 [2024-07-12 11:25:52.609147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.264 [2024-07-12 11:25:52.609171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.625525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.625550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.634090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.634114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.646305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 
[2024-07-12 11:25:52.646329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.662089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.662114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.678818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.678843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.694876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.694900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.711112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.711137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.727618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.727643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.743657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.743682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.758454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.758478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.774571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.774595] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.790840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.790865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.807024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.807048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.822999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.823023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.837564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.837588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.849677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.849703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.523 [2024-07-12 11:25:52.865781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.523 [2024-07-12 11:25:52.865807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.882153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.882180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.898321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.898347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:06.807 [2024-07-12 11:25:52.914576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.914602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.926692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.926718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.938078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.938104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.954483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.954507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.970823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.970848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:52.986988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:52.987013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.003360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.003393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.020029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.020057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.031044] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.031069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.046783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.046811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.058037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.058061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.073544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.073569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.090293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.090318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.102529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.102555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.112486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.112511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.128998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.129023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.145029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.145054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.807 [2024-07-12 11:25:53.160101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.807 [2024-07-12 11:25:53.160127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.172561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.172586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.189417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.189442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.205667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.205691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.222568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.222593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.233894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.233918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.249505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.249530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.265799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 
[2024-07-12 11:25:53.265823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.282139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.282163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.294213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.294238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.304070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.304094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.320475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.320499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.336798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.336823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.352936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.352960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.369575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.369599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.385818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.385843] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.397555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.397580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.066 [2024-07-12 11:25:53.407434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.066 [2024-07-12 11:25:53.407458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.423820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.423848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.439848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.439871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.456272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.456296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.470791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.470815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.482662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.482687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.325 [2024-07-12 11:25:53.492575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.492599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:07.325 [2024-07-12 11:25:53.508835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.325 [2024-07-12 11:25:53.508860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... repeated output elided: the same two-message error pair (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) recurs continuously from 11:25:53.524 through 11:25:55.907, differing only in timestamps ...]
00:20:09.658 [2024-07-12 11:25:55.919710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.919735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:09.659 [2024-07-12 11:25:55.934921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.934945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.659 [2024-07-12 11:25:55.946485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.946509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.659 [2024-07-12 11:25:55.956602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.956626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.659 [2024-07-12 11:25:55.966545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.966569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.659 [2024-07-12 11:25:55.982870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.982894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.659 [2024-07-12 11:25:55.999454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.659 [2024-07-12 11:25:55.999479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.015886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.015910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.032296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.032321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.048353] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.048384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.058494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.058517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.075265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.075293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.090920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.090945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.106603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.106628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.118595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.118618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.134517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.134541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.150957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.150981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.163848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.163871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.179928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.179952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.195153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.195176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.211462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.211486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.222984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.223007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.238545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.238569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.254637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.254660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.918 [2024-07-12 11:25:56.271109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.918 [2024-07-12 11:25:56.271133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.287733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 
[2024-07-12 11:25:56.287758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.296574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.296597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.312872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.312895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.324616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.324650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.334194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.334217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.350362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.350397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.366747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.366770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.383048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.383072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.399204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.399227] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.411186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.411210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.427465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.177 [2024-07-12 11:25:56.427488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.177 [2024-07-12 11:25:56.443202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.443225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.178 [2024-07-12 11:25:56.458190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.458214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.178 [2024-07-12 11:25:56.474762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.474785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.178 [2024-07-12 11:25:56.491044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.491068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.178 [2024-07-12 11:25:56.503509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.503533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.178 [2024-07-12 11:25:56.515563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.515585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:10.178 [2024-07-12 11:25:56.531532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.178 [2024-07-12 11:25:56.531556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.548436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.548460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.564314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.564337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.575834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.575857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.592223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.592247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.608820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.608843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.620350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.620373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.636961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.636988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.649094] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.649117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.659907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.659931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.676631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.676655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.693803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.693827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.704720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.704744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.720677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.720701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.731520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.731543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.747872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.747896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.764157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.764180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.775870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.775894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.437 [2024-07-12 11:25:56.792399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.437 [2024-07-12 11:25:56.792424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.808944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.808968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.825291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.825314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.837326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.837349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.854272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.854295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.870195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.870219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.880902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 
[2024-07-12 11:25:56.880926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.897783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.897807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.913167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.913194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.925223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.925248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.940742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.940766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.952943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.952967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.968512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.968546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.984977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.985000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:56.996956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:56.996979] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:57.013823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:57.013847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:57.030212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:57.030236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.696 [2024-07-12 11:25:57.046734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.696 [2024-07-12 11:25:57.046759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.063371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.063404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.075486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.075510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.090719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.090744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.107427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.107452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.119847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.119871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:10.956 [2024-07-12 11:25:57.132532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.132557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.144101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.144125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.160957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.160982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.176641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.176666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.192968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.192996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.209003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.209027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.221174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.221198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.236448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.236472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.248580] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.248604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.264138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.264161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.280480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.280504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.296558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.296581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.956 [2024-07-12 11:25:57.308327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.956 [2024-07-12 11:25:57.308351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.324695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.324719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.341699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.341723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 00:20:11.215 Latency(us) 00:20:11.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.215 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:11.215 Nvme1n1 : 5.01 14213.20 111.04 0.00 0.00 8996.36 4046.14 15500.69 00:20:11.215 
=================================================================================================================== 00:20:11.215 Total : 14213.20 111.04 0.00 0.00 8996.36 4046.14 15500.69 00:20:11.215 [2024-07-12 11:25:57.353389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.353427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.365409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.365431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.377431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.377452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.389451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.389471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.401534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.401564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.413535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.413561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.425566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.425587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.437611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.437632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.449615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.449636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.461656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.461676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.473692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.473712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.215 [2024-07-12 11:25:57.485712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.215 [2024-07-12 11:25:57.485731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.497762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.497782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.509788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.509807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.521830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.521849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.533863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 
11:25:57.533882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.545883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.545902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.557927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.557946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.216 [2024-07-12 11:25:57.569962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.216 [2024-07-12 11:25:57.569981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.581991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.582010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.594041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.594060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.606046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.606064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.618096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.618115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.630128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.630146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.642147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.642166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.654194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.654212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.666252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.666279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.678267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.678290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.690297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.690316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.702317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.702337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.714363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.714388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.726434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.726458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 
[2024-07-12 11:25:57.738451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.738476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.750486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.750506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.762513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.762535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.774540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.774559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.786566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.786585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.798588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.798607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.810632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.810651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.476 [2024-07-12 11:25:57.830686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.476 [2024-07-12 11:25:57.830705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.842709] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.842728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.854754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.854773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.866793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.866812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.878824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.878844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.890853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.890872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.902881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.902900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.914933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.914953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.926964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.926983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.938982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.939001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.951029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.951049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.963062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.963083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.975081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.975100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.987125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.987144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:57.999147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:57.999165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.011191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.011210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.023234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.023253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.035283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 
[2024-07-12 11:25:58.035303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.047301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.047320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.059330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.059350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.071349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.071369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.736 [2024-07-12 11:25:58.083400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.736 [2024-07-12 11:25:58.083419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.095423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.095445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.107457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.107476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.119487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.119506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.131516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.131536] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.143576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.143595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.155592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.155610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.167612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.167632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.179673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.179691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.191685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.191704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.203722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.203742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.995 [2024-07-12 11:25:58.215759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.995 [2024-07-12 11:25:58.215778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.227781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.227800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:20:11.996 [2024-07-12 11:25:58.239826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.239845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.251861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.251880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.263877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.263895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.275919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.275937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.287944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.287963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.299989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.300008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.312026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.312045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.324050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.324072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.336086] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.336104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.996 [2024-07-12 11:25:58.348120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.996 [2024-07-12 11:25:58.348139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.360140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.360159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.372186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.372205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.384206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.384225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.396254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.396273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.408288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.408306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.420308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.420327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 [2024-07-12 11:25:58.432358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:20:12.255 [2024-07-12 11:25:58.432384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (929905) - No such process 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 929905 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:12.255 delay0 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.255 11:25:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:12.255 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.255 [2024-07-12 11:25:58.593708] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:20.366 Initializing NVMe Controllers 00:20:20.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.366 Initialization complete. Launching workers. 00:20:20.366 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 19228 00:20:20.366 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19384, failed to submit 110 00:20:20.366 success 19271, unsuccess 113, failed 0 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.366 rmmod nvme_tcp 00:20:20.366 rmmod nvme_fabrics 00:20:20.366 rmmod nvme_keyring 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 927806 ']' 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 927806 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 
927806 ']' 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 927806 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 927806 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 927806' 00:20:20.366 killing process with pid 927806 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 927806 00:20:20.366 11:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 927806 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.931 11:26:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.522 11:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.522 00:20:23.522 real 0m36.305s 00:20:23.522 user 0m50.542s 00:20:23.522 sys 0m11.246s 00:20:23.522 11:26:09 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.522 11:26:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:23.522 ************************************ 00:20:23.522 END TEST nvmf_zcopy 00:20:23.522 ************************************ 00:20:23.522 11:26:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.522 11:26:09 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:23.522 11:26:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.522 11:26:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.522 11:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.522 ************************************ 00:20:23.522 START TEST nvmf_nmic 00:20:23.522 ************************************ 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:23.522 * Looking for test storage... 
00:20:23.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.522 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.523 11:26:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.793 11:26:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:20:28.793 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.794 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.794 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.794 11:26:14 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:20:28.794 00:20:28.794 --- 10.0.0.2 ping statistics --- 00:20:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.794 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:28.794 00:20:28.794 --- 10.0.0.1 ping statistics --- 00:20:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.794 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=935918 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 935918 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 935918 ']' 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:28.794 11:26:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.794 [2024-07-12 11:26:14.551500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:28.794 [2024-07-12 11:26:14.551587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.794 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.794 [2024-07-12 11:26:14.660555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.794 [2024-07-12 11:26:14.875221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.794 [2024-07-12 11:26:14.875264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.794 [2024-07-12 11:26:14.875275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.794 [2024-07-12 11:26:14.875284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.794 [2024-07-12 11:26:14.875293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
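The `waitforlisten` step traced above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A minimal, unprivileged sketch of that poll-with-budget pattern (the real helper probes the UNIX socket via an RPC; here a plain path check and the `wait_for_path` name are stand-ins):

```shell
# Sketch of the waitforlisten pattern: poll until a path appears,
# giving up after a retry budget (mirrors "local max_retries=100" above).
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.01
  done
  return 0
}

tmp=$(mktemp)
wait_for_path "$tmp" && echo "listening"
rm -f "$tmp"
```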
00:20:28.794 [2024-07-12 11:26:14.875414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.794 [2024-07-12 11:26:14.875471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.794 [2024-07-12 11:26:14.875644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.794 [2024-07-12 11:26:14.875653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.054 [2024-07-12 11:26:15.371617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.054 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 Malloc0 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 [2024-07-12 11:26:15.499471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:29.313 test case1: single bdev can't be used in multiple subsystems 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 [2024-07-12 11:26:15.523373] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:29.313 [2024-07-12 11:26:15.523412] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:29.313 [2024-07-12 11:26:15.523424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:29.313 request: 00:20:29.313 { 00:20:29.313 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.313 "namespace": { 00:20:29.313 "bdev_name": "Malloc0", 00:20:29.313 "no_auto_visible": false 00:20:29.313 }, 00:20:29.313 "method": "nvmf_subsystem_add_ns", 00:20:29.313 "req_id": 1 00:20:29.313 } 00:20:29.313 Got JSON-RPC error response 00:20:29.313 response: 00:20:29.313 { 00:20:29.313 "code": -32602, 00:20:29.313 "message": "Invalid parameters" 00:20:29.313 } 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:20:29.313 Adding namespace failed - expected result. 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:29.313 test case2: host connect to nvmf target in multiple paths 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:29.313 [2024-07-12 11:26:15.535522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.313 11:26:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:30.691 11:26:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:31.629 11:26:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:31.629 11:26:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:20:31.629 11:26:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:31.629 11:26:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:31.629 11:26:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:33.532 11:26:19 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:20:33.532 11:26:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:33.532 [global] 00:20:33.532 thread=1 00:20:33.532 invalidate=1 00:20:33.532 rw=write 00:20:33.532 time_based=1 00:20:33.532 runtime=1 00:20:33.532 ioengine=libaio 00:20:33.532 direct=1 00:20:33.532 bs=4096 00:20:33.532 iodepth=1 00:20:33.532 norandommap=0 00:20:33.532 numjobs=1 00:20:33.532 00:20:33.532 verify_dump=1 00:20:33.532 verify_backlog=512 00:20:33.532 verify_state_save=0 00:20:33.532 do_verify=1 00:20:33.532 verify=crc32c-intel 00:20:33.532 [job0] 00:20:33.532 filename=/dev/nvme0n1 00:20:33.532 Could not set queue depth (nvme0n1) 00:20:33.791 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:33.791 fio-3.35 00:20:33.791 Starting 1 thread 00:20:35.167 00:20:35.167 job0: (groupid=0, jobs=1): err= 0: pid=936983: Fri Jul 12 11:26:21 2024 00:20:35.167 read: IOPS=2054, BW=8220KiB/s (8417kB/s)(8228KiB/1001msec) 00:20:35.167 slat (nsec): min=6816, max=37524, avg=7724.24, stdev=1485.97 00:20:35.167 clat (usec): min=196, max=323, avg=241.85, stdev=25.37 00:20:35.167 lat (usec): min=204, max=331, avg=249.57, stdev=25.36 00:20:35.167 clat percentiles (usec): 00:20:35.167 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:20:35.167 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:20:35.167 | 70.00th=[ 239], 
80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:20:35.167 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 322], 00:20:35.167 | 99.99th=[ 326] 00:20:35.167 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:20:35.167 slat (nsec): min=10190, max=41160, avg=11396.53, stdev=1660.10 00:20:35.167 clat (usec): min=140, max=425, avg=173.16, stdev=27.45 00:20:35.167 lat (usec): min=151, max=467, avg=184.55, stdev=27.61 00:20:35.167 clat percentiles (usec): 00:20:35.167 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:20:35.167 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 165], 00:20:35.167 | 70.00th=[ 167], 80.00th=[ 180], 90.00th=[ 223], 95.00th=[ 229], 00:20:35.167 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 318], 99.95th=[ 322], 00:20:35.167 | 99.99th=[ 424] 00:20:35.167 bw ( KiB/s): min=11568, max=11568, per=100.00%, avg=11568.00, stdev= 0.00, samples=1 00:20:35.167 iops : min= 2892, max= 2892, avg=2892.00, stdev= 0.00, samples=1 00:20:35.167 lat (usec) : 250=88.07%, 500=11.93% 00:20:35.167 cpu : usr=4.00%, sys=7.10%, ctx=4617, majf=0, minf=2 00:20:35.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:35.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.167 issued rwts: total=2057,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:35.167 00:20:35.167 Run status group 0 (all jobs): 00:20:35.167 READ: bw=8220KiB/s (8417kB/s), 8220KiB/s-8220KiB/s (8417kB/s-8417kB/s), io=8228KiB (8425kB), run=1001-1001msec 00:20:35.167 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:20:35.167 00:20:35.167 Disk stats (read/write): 00:20:35.167 nvme0n1: ios=2077/2048, merge=0/0, ticks=483/336, in_queue=819, util=91.38% 00:20:35.167 
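The fio summary above is internally consistent: 2057 issued 4 KiB reads over a 1001 ms runtime give the reported 8228 KiB of I/O at 8220 KiB/s and ~2054 IOPS. A small arithmetic check, taking those three figures straight from the log:

```shell
# Sanity-check the fio read-side numbers reported above.
issued_reads=2057    # from "issued rwts: total=2057,2560,0,0"
runtime_ms=1001      # from "run=1001-1001msec"
bs_kib=4             # from "bs=4096"

read_kib=$((issued_reads * bs_kib))   # total bytes read, in KiB
bw_kib_s=$(awk -v k="$read_kib" -v t="$runtime_ms" 'BEGIN{printf "%.0f", k*1000/t}')
iops=$(awk -v n="$issued_reads" -v t="$runtime_ms" 'BEGIN{printf "%d", n*1000/t}')
echo "read: ${read_kib}KiB bw=${bw_kib_s}KiB/s iops=${iops}"
```

This reproduces `io=8228KiB`, `BW=8220KiB/s`, and the `avg=2054`-class IOPS fio printed (fio truncates average IOPS, hence `%d` rather than rounding).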
11:26:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:35.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.735 rmmod nvme_tcp 00:20:35.735 rmmod nvme_fabrics 00:20:35.735 rmmod nvme_keyring 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 935918 ']' 00:20:35.735 
11:26:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 935918 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 935918 ']' 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 935918 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935918 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935918' 00:20:35.735 killing process with pid 935918 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 935918 00:20:35.735 11:26:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 935918 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.639 11:26:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.545 11:26:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:39.545 
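The `killprocess 935918` sequence traced above follows a common teardown pattern: confirm the pid is still alive with `kill -0`, send SIGTERM, then `wait` to reap it. A minimal sketch (the function body is an illustration of the pattern, not the exact `autotest_common.sh` helper, which also inspects the process name and guards against `sudo`):

```shell
# Sketch of the killprocess teardown pattern: liveness check, SIGTERM, reap.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # is the process still running?
  kill "$pid"                              # ask it to terminate
  wait "$pid" 2>/dev/null                  # reap the child; ignore its status
  return 0
}

sleep 30 &
killprocess $! && echo "killed"
```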
00:20:39.545 real 0m16.162s 00:20:39.545 user 0m39.474s 00:20:39.545 sys 0m4.769s 00:20:39.545 11:26:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.545 11:26:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:39.545 ************************************ 00:20:39.545 END TEST nvmf_nmic 00:20:39.545 ************************************ 00:20:39.545 11:26:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.545 11:26:25 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:39.545 11:26:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.545 11:26:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.545 11:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.545 ************************************ 00:20:39.545 START TEST nvmf_fio_target 00:20:39.545 ************************************ 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:39.545 * Looking for test storage... 
00:20:39.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.545 11:26:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.817 
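The `e810=()` / `x722=()` / `mlx=()` arrays being populated above key off PCI device IDs; this run matches `0x159b` (Intel E810, `ice` driver) on both ports of `0000:86:00`. The mapping implied by the traced `pci_bus_cache` lookups can be sketched as a single case statement (the `classify_nic` name is illustrative; the IDs are the ones visible in the trace):

```shell
# Classify a NIC by PCI device ID, mirroring the e810/x722/mlx bucketing
# done by nvmf/common.sh in the trace above.
classify_nic() {
  case "$1" in
    0x1592|0x159b) echo e810 ;;    # Intel E810 (ice); 0x159b is this run's NIC
    0x37d2)        echo x722 ;;    # Intel X722
    0xa2dc|0x1021|0xa2d6|0x101d|0x1017|0x1019|0x1015|0x1013)
                   echo mlx ;;     # Mellanox families probed above
    *)             echo unknown ;;
  esac
}

classify_nic 0x159b   # prints "e810"
```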
11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.817 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.817 
11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.818 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.818 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.818 11:26:30 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:20:44.818 00:20:44.818 --- 10.0.0.2 ping statistics --- 00:20:44.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.818 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:20:44.818 00:20:44.818 --- 10.0.0.1 ping statistics --- 00:20:44.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.818 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=940761 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 940761 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- 
# '[' -z 940761 ']' 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.818 11:26:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.818 [2024-07-12 11:26:30.944715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:44.818 [2024-07-12 11:26:30.944805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.818 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.818 [2024-07-12 11:26:31.055599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.076 [2024-07-12 11:26:31.273267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.076 [2024-07-12 11:26:31.273310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.076 [2024-07-12 11:26:31.273322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.076 [2024-07-12 11:26:31.273330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.076 [2024-07-12 11:26:31.273339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.076 [2024-07-12 11:26:31.273435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.076 [2024-07-12 11:26:31.273511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.076 [2024-07-12 11:26:31.273524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.076 [2024-07-12 11:26:31.273535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:45.642 [2024-07-12 11:26:31.928058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.642 11:26:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.901 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:45.901 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:46.159 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:46.159 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:20:46.417 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:46.417 11:26:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:46.675 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:46.675 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:46.933 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.192 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:47.192 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.451 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:47.451 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:47.709 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:47.709 11:26:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:47.967 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:48.226 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:48.226 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.226 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:48.226 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:48.484 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.743 [2024-07-12 11:26:34.849067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.743 11:26:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:48.743 11:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:49.002 11:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:50.374 11:26:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:50.374 11:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:20:50.374 11:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:50.374 11:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:20:50.374 11:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:20:50.374 11:26:36 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:20:52.279 11:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:52.279 [global] 00:20:52.279 thread=1 00:20:52.279 invalidate=1 00:20:52.279 rw=write 00:20:52.279 time_based=1 00:20:52.279 runtime=1 00:20:52.279 ioengine=libaio 00:20:52.279 direct=1 00:20:52.279 bs=4096 00:20:52.279 iodepth=1 00:20:52.279 norandommap=0 00:20:52.279 numjobs=1 00:20:52.279 00:20:52.279 verify_dump=1 00:20:52.279 verify_backlog=512 00:20:52.279 verify_state_save=0 00:20:52.279 do_verify=1 00:20:52.279 verify=crc32c-intel 00:20:52.280 [job0] 00:20:52.280 filename=/dev/nvme0n1 00:20:52.280 [job1] 00:20:52.280 filename=/dev/nvme0n2 00:20:52.280 [job2] 00:20:52.280 filename=/dev/nvme0n3 00:20:52.280 [job3] 00:20:52.280 filename=/dev/nvme0n4 00:20:52.280 Could not set queue depth (nvme0n1) 00:20:52.280 Could not set queue depth (nvme0n2) 00:20:52.280 Could not set queue depth (nvme0n3) 00:20:52.280 Could not set queue depth (nvme0n4) 00:20:52.538 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.538 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:20:52.538 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.538 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.538 fio-3.35 00:20:52.538 Starting 4 threads 00:20:53.916 00:20:53.916 job0: (groupid=0, jobs=1): err= 0: pid=942328: Fri Jul 12 11:26:40 2024 00:20:53.916 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:20:53.916 slat (nsec): min=10834, max=22326, avg=20418.73, stdev=2892.74 00:20:53.916 clat (usec): min=40595, max=41043, avg=40951.81, stdev=88.80 00:20:53.916 lat (usec): min=40606, max=41065, avg=40972.23, stdev=90.40 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:53.916 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:53.916 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:53.916 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:53.916 | 99.99th=[41157] 00:20:53.916 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:20:53.916 slat (nsec): min=10982, max=39372, avg=13779.24, stdev=2288.03 00:20:53.916 clat (usec): min=164, max=321, avg=223.05, stdev=26.56 00:20:53.916 lat (usec): min=175, max=333, avg=236.83, stdev=26.77 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:20:53.916 | 30.00th=[ 212], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 237], 00:20:53.916 | 70.00th=[ 239], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 249], 00:20:53.916 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 322], 00:20:53.916 | 99.99th=[ 322] 00:20:53.916 bw ( KiB/s): min= 4096, max= 4096, per=19.20%, avg=4096.00, stdev= 0.00, samples=1 00:20:53.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:53.916 lat (usec) : 250=91.95%, 500=3.93% 
00:20:53.916 lat (msec) : 50=4.12% 00:20:53.916 cpu : usr=0.39%, sys=1.07%, ctx=535, majf=0, minf=1 00:20:53.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.916 job1: (groupid=0, jobs=1): err= 0: pid=942329: Fri Jul 12 11:26:40 2024 00:20:53.916 read: IOPS=28, BW=116KiB/s (118kB/s)(116KiB/1004msec) 00:20:53.916 slat (nsec): min=8075, max=23804, avg=18199.03, stdev=5824.85 00:20:53.916 clat (usec): min=337, max=41968, avg=29804.12, stdev=18475.02 00:20:53.916 lat (usec): min=358, max=41991, avg=29822.32, stdev=18474.64 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 400], 00:20:53.916 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:20:53.916 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:20:53.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:53.916 | 99.99th=[42206] 00:20:53.916 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:20:53.916 slat (nsec): min=10701, max=39284, avg=12214.24, stdev=2287.89 00:20:53.916 clat (usec): min=176, max=767, avg=255.47, stdev=57.31 00:20:53.916 lat (usec): min=188, max=778, avg=267.68, stdev=57.38 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 219], 00:20:53.916 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:20:53.916 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:20:53.916 | 99.00th=[ 510], 99.50th=[ 668], 99.90th=[ 766], 99.95th=[ 766], 00:20:53.916 | 99.99th=[ 766] 00:20:53.916 bw ( KiB/s): min= 4096, max= 
4096, per=19.20%, avg=4096.00, stdev= 0.00, samples=1 00:20:53.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:53.916 lat (usec) : 250=46.03%, 500=48.80%, 750=1.11%, 1000=0.18% 00:20:53.916 lat (msec) : 50=3.88% 00:20:53.916 cpu : usr=0.70%, sys=0.70%, ctx=542, majf=0, minf=2 00:20:53.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.916 job2: (groupid=0, jobs=1): err= 0: pid=942330: Fri Jul 12 11:26:40 2024 00:20:53.916 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:53.916 slat (nsec): min=2210, max=34484, avg=6139.83, stdev=2963.50 00:20:53.916 clat (usec): min=196, max=483, avg=252.09, stdev=24.97 00:20:53.916 lat (usec): min=198, max=491, avg=258.23, stdev=25.95 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:20:53.916 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:20:53.916 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:20:53.916 | 99.00th=[ 322], 99.50th=[ 396], 99.90th=[ 474], 99.95th=[ 478], 00:20:53.916 | 99.99th=[ 486] 00:20:53.916 write: IOPS=2202, BW=8811KiB/s (9023kB/s)(8820KiB/1001msec); 0 zone resets 00:20:53.916 slat (nsec): min=3320, max=29256, avg=8756.93, stdev=4216.08 00:20:53.916 clat (usec): min=121, max=728, avg=200.51, stdev=50.32 00:20:53.916 lat (usec): min=125, max=735, avg=209.27, stdev=50.10 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 167], 00:20:53.916 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:20:53.916 | 70.00th=[ 206], 80.00th=[ 233], 
90.00th=[ 277], 95.00th=[ 289], 00:20:53.916 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 676], 99.95th=[ 701], 00:20:53.916 | 99.99th=[ 725] 00:20:53.916 bw ( KiB/s): min= 8936, max= 8936, per=41.88%, avg=8936.00, stdev= 0.00, samples=1 00:20:53.916 iops : min= 2234, max= 2234, avg=2234.00, stdev= 0.00, samples=1 00:20:53.916 lat (usec) : 250=67.08%, 500=32.75%, 750=0.16% 00:20:53.916 cpu : usr=2.90%, sys=4.80%, ctx=4255, majf=0, minf=1 00:20:53.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.916 issued rwts: total=2048,2205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.916 job3: (groupid=0, jobs=1): err= 0: pid=942331: Fri Jul 12 11:26:40 2024 00:20:53.916 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:20:53.916 slat (nsec): min=4174, max=38860, avg=8445.56, stdev=1715.59 00:20:53.916 clat (usec): min=195, max=567, avg=245.18, stdev=30.30 00:20:53.916 lat (usec): min=202, max=571, avg=253.62, stdev=30.37 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:20:53.916 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:20:53.916 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:20:53.916 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 490], 99.95th=[ 490], 00:20:53.916 | 99.99th=[ 570] 00:20:53.916 write: IOPS=2235, BW=8943KiB/s (9158kB/s)(8952KiB/1001msec); 0 zone resets 00:20:53.916 slat (usec): min=5, max=1558, avg=12.30, stdev=32.79 00:20:53.916 clat (usec): min=147, max=423, avg=196.84, stdev=34.78 00:20:53.916 lat (usec): min=159, max=1777, avg=209.14, stdev=47.14 00:20:53.916 clat percentiles (usec): 00:20:53.916 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 
00:20:53.916 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:20:53.917 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 265], 95.00th=[ 285], 00:20:53.917 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 400], 99.95th=[ 408], 00:20:53.917 | 99.99th=[ 424] 00:20:53.917 bw ( KiB/s): min= 8528, max= 8528, per=39.97%, avg=8528.00, stdev= 0.00, samples=1 00:20:53.917 iops : min= 2132, max= 2132, avg=2132.00, stdev= 0.00, samples=1 00:20:53.917 lat (usec) : 250=80.38%, 500=19.60%, 750=0.02% 00:20:53.917 cpu : usr=2.70%, sys=6.10%, ctx=4288, majf=0, minf=1 00:20:53.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.917 issued rwts: total=2048,2238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.917 00:20:53.917 Run status group 0 (all jobs): 00:20:53.917 READ: bw=15.8MiB/s (16.6MB/s), 85.9KiB/s-8184KiB/s (87.9kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1025msec 00:20:53.917 WRITE: bw=20.8MiB/s (21.8MB/s), 1998KiB/s-8943KiB/s (2046kB/s-9158kB/s), io=21.4MiB (22.4MB), run=1001-1025msec 00:20:53.917 00:20:53.917 Disk stats (read/write): 00:20:53.917 nvme0n1: ios=59/512, merge=0/0, ticks=811/103, in_queue=914, util=87.07% 00:20:53.917 nvme0n2: ios=49/512, merge=0/0, ticks=1606/123, in_queue=1729, util=89.94% 00:20:53.917 nvme0n3: ios=1711/2048, merge=0/0, ticks=524/403, in_queue=927, util=93.56% 00:20:53.917 nvme0n4: ios=1694/2048, merge=0/0, ticks=504/389, in_queue=893, util=95.29% 00:20:53.917 11:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:53.917 [global] 00:20:53.917 thread=1 00:20:53.917 invalidate=1 00:20:53.917 rw=randwrite 00:20:53.917 time_based=1 
00:20:53.917 runtime=1 00:20:53.917 ioengine=libaio 00:20:53.917 direct=1 00:20:53.917 bs=4096 00:20:53.917 iodepth=1 00:20:53.917 norandommap=0 00:20:53.917 numjobs=1 00:20:53.917 00:20:53.917 verify_dump=1 00:20:53.917 verify_backlog=512 00:20:53.917 verify_state_save=0 00:20:53.917 do_verify=1 00:20:53.917 verify=crc32c-intel 00:20:53.917 [job0] 00:20:53.917 filename=/dev/nvme0n1 00:20:53.917 [job1] 00:20:53.917 filename=/dev/nvme0n2 00:20:53.917 [job2] 00:20:53.917 filename=/dev/nvme0n3 00:20:53.917 [job3] 00:20:53.917 filename=/dev/nvme0n4 00:20:53.917 Could not set queue depth (nvme0n1) 00:20:53.917 Could not set queue depth (nvme0n2) 00:20:53.917 Could not set queue depth (nvme0n3) 00:20:53.917 Could not set queue depth (nvme0n4) 00:20:54.176 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.176 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.176 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.176 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:54.176 fio-3.35 00:20:54.176 Starting 4 threads 00:20:55.553 00:20:55.553 job0: (groupid=0, jobs=1): err= 0: pid=942701: Fri Jul 12 11:26:41 2024 00:20:55.553 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:20:55.553 slat (nsec): min=9998, max=25034, avg=21000.23, stdev=3460.79 00:20:55.553 clat (usec): min=40658, max=41978, avg=41045.60, stdev=303.58 00:20:55.553 lat (usec): min=40668, max=42001, avg=41066.60, stdev=304.69 00:20:55.553 clat percentiles (usec): 00:20:55.553 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:55.553 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:55.553 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:20:55.553 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:55.553 | 99.99th=[42206] 00:20:55.553 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:20:55.553 slat (nsec): min=6182, max=35613, avg=8235.93, stdev=2704.21 00:20:55.553 clat (usec): min=190, max=416, avg=234.61, stdev=20.21 00:20:55.553 lat (usec): min=198, max=451, avg=242.85, stdev=20.74 00:20:55.553 clat percentiles (usec): 00:20:55.553 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:20:55.553 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:20:55.553 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:20:55.553 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 416], 99.95th=[ 416], 00:20:55.553 | 99.99th=[ 416] 00:20:55.553 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:20:55.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:55.553 lat (usec) : 250=80.52%, 500=15.36% 00:20:55.553 lat (msec) : 50=4.12% 00:20:55.554 cpu : usr=0.58%, sys=0.39%, ctx=534, majf=0, minf=1 00:20:55.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.554 job1: (groupid=0, jobs=1): err= 0: pid=942702: Fri Jul 12 11:26:41 2024 00:20:55.554 read: IOPS=2195, BW=8783KiB/s (8994kB/s)(8792KiB/1001msec) 00:20:55.554 slat (nsec): min=7007, max=25792, avg=7946.31, stdev=1194.85 00:20:55.554 clat (usec): min=189, max=1628, avg=234.47, stdev=35.22 00:20:55.554 lat (usec): min=197, max=1636, avg=242.42, stdev=35.27 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:20:55.554 | 30.00th=[ 225], 
40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:20:55.554 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 265], 00:20:55.554 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 334], 00:20:55.554 | 99.99th=[ 1631] 00:20:55.554 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:20:55.554 slat (nsec): min=10260, max=40458, avg=11499.78, stdev=1681.59 00:20:55.554 clat (usec): min=121, max=591, avg=165.38, stdev=29.35 00:20:55.554 lat (usec): min=132, max=602, avg=176.88, stdev=29.52 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:20:55.554 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:20:55.554 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 208], 00:20:55.554 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 529], 99.95th=[ 545], 00:20:55.554 | 99.99th=[ 594] 00:20:55.554 bw ( KiB/s): min=11464, max=11464, per=72.00%, avg=11464.00, stdev= 0.00, samples=1 00:20:55.554 iops : min= 2866, max= 2866, avg=2866.00, stdev= 0.00, samples=1 00:20:55.554 lat (usec) : 250=90.52%, 500=9.39%, 750=0.06% 00:20:55.554 lat (msec) : 2=0.02% 00:20:55.554 cpu : usr=4.50%, sys=6.80%, ctx=4762, majf=0, minf=1 00:20:55.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 issued rwts: total=2198,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.554 job2: (groupid=0, jobs=1): err= 0: pid=942703: Fri Jul 12 11:26:41 2024 00:20:55.554 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:20:55.554 slat (nsec): min=9449, max=24541, avg=21778.50, stdev=2852.59 00:20:55.554 clat (usec): min=40591, max=41051, avg=40948.43, stdev=105.78 00:20:55.554 lat (usec): min=40601, 
max=41073, avg=40970.21, stdev=107.79 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:20:55.554 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:55.554 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:55.554 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:55.554 | 99.99th=[41157] 00:20:55.554 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:20:55.554 slat (nsec): min=10624, max=44000, avg=11954.60, stdev=2109.50 00:20:55.554 clat (usec): min=162, max=748, avg=197.06, stdev=39.39 00:20:55.554 lat (usec): min=174, max=760, avg=209.02, stdev=39.61 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:20:55.554 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:20:55.554 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 233], 00:20:55.554 | 99.00th=[ 297], 99.50th=[ 502], 99.90th=[ 750], 99.95th=[ 750], 00:20:55.554 | 99.99th=[ 750] 00:20:55.554 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:20:55.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:55.554 lat (usec) : 250=92.88%, 500=2.43%, 750=0.56% 00:20:55.554 lat (msec) : 50=4.12% 00:20:55.554 cpu : usr=0.69%, sys=0.59%, ctx=535, majf=0, minf=2 00:20:55.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.554 job3: (groupid=0, jobs=1): err= 0: pid=942705: Fri Jul 12 11:26:41 2024 00:20:55.554 read: IOPS=21, BW=85.7KiB/s 
(87.7kB/s)(88.0KiB/1027msec) 00:20:55.554 slat (nsec): min=9514, max=25473, avg=21564.36, stdev=2893.53 00:20:55.554 clat (usec): min=40809, max=41084, avg=40968.07, stdev=57.45 00:20:55.554 lat (usec): min=40831, max=41107, avg=40989.63, stdev=56.79 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:55.554 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:55.554 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:55.554 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:55.554 | 99.99th=[41157] 00:20:55.554 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:20:55.554 slat (nsec): min=10352, max=39623, avg=11754.44, stdev=1878.02 00:20:55.554 clat (usec): min=190, max=323, avg=229.80, stdev=17.22 00:20:55.554 lat (usec): min=202, max=348, avg=241.56, stdev=17.44 00:20:55.554 clat percentiles (usec): 00:20:55.554 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:20:55.554 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:20:55.554 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 258], 00:20:55.554 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 326], 00:20:55.554 | 99.99th=[ 326] 00:20:55.554 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:20:55.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:55.554 lat (usec) : 250=87.45%, 500=8.43% 00:20:55.554 lat (msec) : 50=4.12% 00:20:55.554 cpu : usr=0.29%, sys=1.07%, ctx=534, majf=0, minf=1 00:20:55.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.554 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.554 
latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.554 00:20:55.554 Run status group 0 (all jobs): 00:20:55.554 READ: bw=8801KiB/s (9012kB/s), 85.5KiB/s-8783KiB/s (87.6kB/s-8994kB/s), io=9056KiB (9273kB), run=1001-1029msec 00:20:55.554 WRITE: bw=15.5MiB/s (16.3MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:20:55.554 00:20:55.554 Disk stats (read/write): 00:20:55.554 nvme0n1: ios=67/512, merge=0/0, ticks=725/113, in_queue=838, util=87.27% 00:20:55.554 nvme0n2: ios=2009/2048, merge=0/0, ticks=667/320, in_queue=987, util=98.78% 00:20:55.554 nvme0n3: ios=60/512, merge=0/0, ticks=1371/97, in_queue=1468, util=99.27% 00:20:55.554 nvme0n4: ios=61/512, merge=0/0, ticks=836/113, in_queue=949, util=99.69% 00:20:55.554 11:26:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:55.554 [global] 00:20:55.554 thread=1 00:20:55.554 invalidate=1 00:20:55.554 rw=write 00:20:55.554 time_based=1 00:20:55.555 runtime=1 00:20:55.555 ioengine=libaio 00:20:55.555 direct=1 00:20:55.555 bs=4096 00:20:55.555 iodepth=128 00:20:55.555 norandommap=0 00:20:55.555 numjobs=1 00:20:55.555 00:20:55.555 verify_dump=1 00:20:55.555 verify_backlog=512 00:20:55.555 verify_state_save=0 00:20:55.555 do_verify=1 00:20:55.555 verify=crc32c-intel 00:20:55.555 [job0] 00:20:55.555 filename=/dev/nvme0n1 00:20:55.555 [job1] 00:20:55.555 filename=/dev/nvme0n2 00:20:55.555 [job2] 00:20:55.555 filename=/dev/nvme0n3 00:20:55.555 [job3] 00:20:55.555 filename=/dev/nvme0n4 00:20:55.555 Could not set queue depth (nvme0n1) 00:20:55.555 Could not set queue depth (nvme0n2) 00:20:55.555 Could not set queue depth (nvme0n3) 00:20:55.555 Could not set queue depth (nvme0n4) 00:20:55.814 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:55.814 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:55.814 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:55.814 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:55.814 fio-3.35 00:20:55.814 Starting 4 threads 00:20:56.821 00:20:56.821 job0: (groupid=0, jobs=1): err= 0: pid=943076: Fri Jul 12 11:26:43 2024 00:20:56.821 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1007msec) 00:20:56.821 slat (nsec): min=1346, max=11120k, avg=107026.76, stdev=626499.33 00:20:56.821 clat (usec): min=5886, max=38366, avg=13406.83, stdev=4176.65 00:20:56.821 lat (usec): min=6503, max=38393, avg=13513.85, stdev=4218.80 00:20:56.821 clat percentiles (usec): 00:20:56.821 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10683], 00:20:56.821 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:20:56.821 | 70.00th=[13829], 80.00th=[15533], 90.00th=[19006], 95.00th=[21365], 00:20:56.821 | 99.00th=[29492], 99.50th=[31851], 99.90th=[31851], 99.95th=[31851], 00:20:56.821 | 99.99th=[38536] 00:20:56.821 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:20:56.821 slat (usec): min=2, max=44104, avg=107.74, stdev=839.95 00:20:56.821 clat (usec): min=6606, max=50725, avg=13470.89, stdev=4941.39 00:20:56.821 lat (usec): min=6618, max=50765, avg=13578.63, stdev=5003.00 00:20:56.821 clat percentiles (usec): 00:20:56.821 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:20:56.821 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:20:56.822 | 70.00th=[13173], 80.00th=[13960], 90.00th=[17433], 95.00th=[21103], 00:20:56.822 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:20:56.822 | 99.99th=[50594] 00:20:56.822 bw ( KiB/s): min=18184, max=18680, per=27.15%, avg=18432.00, stdev=350.72, samples=2 00:20:56.822 iops : min= 4546, max= 4670, 
avg=4608.00, stdev=87.68, samples=2 00:20:56.822 lat (msec) : 10=8.10%, 20=84.77%, 50=7.13%, 100=0.01% 00:20:56.822 cpu : usr=4.27%, sys=4.57%, ctx=501, majf=0, minf=1 00:20:56.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:56.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.822 issued rwts: total=4457,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.822 job1: (groupid=0, jobs=1): err= 0: pid=943077: Fri Jul 12 11:26:43 2024 00:20:56.822 read: IOPS=3897, BW=15.2MiB/s (16.0MB/s)(16.0MiB/1049msec) 00:20:56.822 slat (nsec): min=1332, max=16567k, avg=118359.06, stdev=863077.11 00:20:56.822 clat (usec): min=4648, max=54433, avg=15268.16, stdev=8234.82 00:20:56.822 lat (usec): min=4654, max=58806, avg=15386.52, stdev=8278.02 00:20:56.822 clat percentiles (usec): 00:20:56.822 | 1.00th=[ 5866], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11469], 00:20:56.822 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13829], 00:20:56.822 | 70.00th=[14484], 80.00th=[16581], 90.00th=[20579], 95.00th=[30802], 00:20:56.823 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:20:56.823 | 99.99th=[54264] 00:20:56.823 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets 00:20:56.823 slat (usec): min=2, max=9763, avg=121.06, stdev=576.14 00:20:56.823 clat (usec): min=1450, max=62640, avg=17224.19, stdev=11011.36 00:20:56.823 lat (usec): min=1465, max=62655, avg=17345.25, stdev=11080.56 00:20:56.823 clat percentiles (usec): 00:20:56.823 | 1.00th=[ 3752], 5.00th=[ 6915], 10.00th=[ 8979], 20.00th=[11207], 00:20:56.823 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13698], 00:20:56.823 | 70.00th=[14877], 80.00th=[23200], 90.00th=[35914], 95.00th=[42730], 00:20:56.823 | 99.00th=[57934], 99.50th=[61604], 
99.90th=[62653], 99.95th=[62653], 00:20:56.823 | 99.99th=[62653] 00:20:56.823 bw ( KiB/s): min=12528, max=20240, per=24.13%, avg=16384.00, stdev=5453.21, samples=2 00:20:56.823 iops : min= 3132, max= 5060, avg=4096.00, stdev=1363.30, samples=2 00:20:56.823 lat (msec) : 2=0.02%, 4=0.59%, 10=12.30%, 20=70.48%, 50=14.49% 00:20:56.823 lat (msec) : 100=2.11% 00:20:56.823 cpu : usr=3.15%, sys=4.58%, ctx=535, majf=0, minf=1 00:20:56.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:56.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.824 issued rwts: total=4088,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.824 job2: (groupid=0, jobs=1): err= 0: pid=943078: Fri Jul 12 11:26:43 2024 00:20:56.824 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:20:56.824 slat (nsec): min=1097, max=13738k, avg=91819.12, stdev=749566.68 00:20:56.824 clat (usec): min=1752, max=57429, avg=13971.65, stdev=4529.23 00:20:56.824 lat (usec): min=1759, max=57437, avg=14063.47, stdev=4572.81 00:20:56.824 clat percentiles (usec): 00:20:56.824 | 1.00th=[ 4424], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11600], 00:20:56.824 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:20:56.824 | 70.00th=[14484], 80.00th=[16909], 90.00th=[19268], 95.00th=[21365], 00:20:56.824 | 99.00th=[25822], 99.50th=[25822], 99.90th=[57410], 99.95th=[57410], 00:20:56.824 | 99.99th=[57410] 00:20:56.825 write: IOPS=5000, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1006msec); 0 zone resets 00:20:56.825 slat (nsec): min=1968, max=18769k, avg=81010.10, stdev=599366.12 00:20:56.825 clat (usec): min=541, max=46052, avg=12608.50, stdev=4865.88 00:20:56.825 lat (usec): min=554, max=46060, avg=12689.51, stdev=4913.94 00:20:56.825 clat percentiles (usec): 00:20:56.825 | 1.00th=[ 3261], 
5.00th=[ 5800], 10.00th=[ 7504], 20.00th=[10159], 00:20:56.825 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:20:56.825 | 70.00th=[13566], 80.00th=[13829], 90.00th=[15401], 95.00th=[22152], 00:20:56.825 | 99.00th=[33424], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:20:56.825 | 99.99th=[45876] 00:20:56.825 bw ( KiB/s): min=18736, max=20480, per=28.88%, avg=19608.00, stdev=1233.19, samples=2 00:20:56.825 iops : min= 4684, max= 5120, avg=4902.00, stdev=308.30, samples=2 00:20:56.825 lat (usec) : 750=0.05% 00:20:56.825 lat (msec) : 2=0.02%, 4=1.26%, 10=13.50%, 20=77.92%, 50=7.12% 00:20:56.825 lat (msec) : 100=0.13% 00:20:56.825 cpu : usr=3.48%, sys=5.07%, ctx=475, majf=0, minf=1 00:20:56.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:56.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.825 issued rwts: total=4608,5030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.825 job3: (groupid=0, jobs=1): err= 0: pid=943079: Fri Jul 12 11:26:43 2024 00:20:56.826 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:20:56.826 slat (nsec): min=1097, max=18620k, avg=149531.16, stdev=1120408.86 00:20:56.826 clat (usec): min=5478, max=57585, avg=18918.17, stdev=9711.09 00:20:56.826 lat (usec): min=5485, max=57617, avg=19067.70, stdev=9797.11 00:20:56.826 clat percentiles (usec): 00:20:56.826 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[11207], 20.00th=[13173], 00:20:56.826 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[16450], 00:20:56.826 | 70.00th=[19268], 80.00th=[21890], 90.00th=[35914], 95.00th=[43779], 00:20:56.826 | 99.00th=[47449], 99.50th=[49546], 99.90th=[54264], 99.95th=[54264], 00:20:56.826 | 99.99th=[57410] 00:20:56.826 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1006msec); 0 zone resets 
00:20:56.826 slat (nsec): min=1955, max=11289k, avg=103955.73, stdev=716178.38 00:20:56.826 clat (usec): min=1425, max=51696, avg=14676.02, stdev=6432.60 00:20:56.826 lat (usec): min=1865, max=51707, avg=14779.97, stdev=6462.96 00:20:56.826 clat percentiles (usec): 00:20:56.826 | 1.00th=[ 3621], 5.00th=[ 5735], 10.00th=[ 8455], 20.00th=[11076], 00:20:56.826 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13435], 60.00th=[14484], 00:20:56.826 | 70.00th=[16057], 80.00th=[16909], 90.00th=[21365], 95.00th=[25560], 00:20:56.826 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:20:56.826 | 99.99th=[51643] 00:20:56.826 bw ( KiB/s): min=13560, max=17992, per=23.24%, avg=15776.00, stdev=3133.90, samples=2 00:20:56.826 iops : min= 3390, max= 4498, avg=3944.00, stdev=783.47, samples=2 00:20:56.826 lat (msec) : 2=0.20%, 4=0.80%, 10=9.04%, 20=71.32%, 50=18.14% 00:20:56.826 lat (msec) : 100=0.51% 00:20:56.826 cpu : usr=2.89%, sys=3.58%, ctx=281, majf=0, minf=1 00:20:56.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:56.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.827 issued rwts: total=3584,4072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.827 00:20:56.827 Run status group 0 (all jobs): 00:20:56.827 READ: bw=62.3MiB/s (65.4MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=65.4MiB (68.6MB), run=1006-1049msec 00:20:56.827 WRITE: bw=66.3MiB/s (69.5MB/s), 15.3MiB/s-19.5MiB/s (16.0MB/s-20.5MB/s), io=69.6MiB (72.9MB), run=1006-1049msec 00:20:56.827 00:20:56.827 Disk stats (read/write): 00:20:56.827 nvme0n1: ios=3699/4096, merge=0/0, ticks=23899/26024, in_queue=49923, util=98.50% 00:20:56.827 nvme0n2: ios=3122/3527, merge=0/0, ticks=43486/62955, in_queue=106441, util=98.68% 00:20:56.827 nvme0n3: ios=4126/4103, merge=0/0, 
ticks=53637/46478, in_queue=100115, util=97.82% 00:20:56.827 nvme0n4: ios=3119/3384, merge=0/0, ticks=37102/30616, in_queue=67718, util=96.13% 00:20:56.827 11:26:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:57.113 [global] 00:20:57.113 thread=1 00:20:57.113 invalidate=1 00:20:57.113 rw=randwrite 00:20:57.113 time_based=1 00:20:57.113 runtime=1 00:20:57.113 ioengine=libaio 00:20:57.113 direct=1 00:20:57.113 bs=4096 00:20:57.113 iodepth=128 00:20:57.113 norandommap=0 00:20:57.113 numjobs=1 00:20:57.113 00:20:57.113 verify_dump=1 00:20:57.113 verify_backlog=512 00:20:57.113 verify_state_save=0 00:20:57.113 do_verify=1 00:20:57.113 verify=crc32c-intel 00:20:57.113 [job0] 00:20:57.113 filename=/dev/nvme0n1 00:20:57.113 [job1] 00:20:57.113 filename=/dev/nvme0n2 00:20:57.113 [job2] 00:20:57.113 filename=/dev/nvme0n3 00:20:57.113 [job3] 00:20:57.113 filename=/dev/nvme0n4 00:20:57.113 Could not set queue depth (nvme0n1) 00:20:57.113 Could not set queue depth (nvme0n2) 00:20:57.113 Could not set queue depth (nvme0n3) 00:20:57.113 Could not set queue depth (nvme0n4) 00:20:57.375 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:57.375 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:57.375 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:57.375 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:57.375 fio-3.35 00:20:57.375 Starting 4 threads 00:20:58.741 00:20:58.741 job0: (groupid=0, jobs=1): err= 0: pid=943450: Fri Jul 12 11:26:44 2024 00:20:58.741 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:20:58.741 slat (nsec): min=1095, max=22117k, avg=122616.69, stdev=859291.79 
00:20:58.741 clat (usec): min=4855, max=57183, avg=15163.74, stdev=7427.37 00:20:58.741 lat (usec): min=4874, max=57207, avg=15286.35, stdev=7513.76 00:20:58.741 clat percentiles (usec): 00:20:58.741 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10028], 00:20:58.741 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12125], 60.00th=[13566], 00:20:58.741 | 70.00th=[15926], 80.00th=[18744], 90.00th=[28181], 95.00th=[30278], 00:20:58.741 | 99.00th=[35390], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:20:58.741 | 99.99th=[57410] 00:20:58.741 write: IOPS=3831, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1010msec); 0 zone resets 00:20:58.741 slat (usec): min=2, max=24862, avg=138.64, stdev=966.34 00:20:58.741 clat (usec): min=1987, max=97154, avg=18959.12, stdev=15889.69 00:20:58.741 lat (usec): min=2025, max=97160, avg=19097.76, stdev=15983.56 00:20:58.741 clat percentiles (usec): 00:20:58.741 | 1.00th=[ 4752], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[10552], 00:20:58.741 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13960], 00:20:58.741 | 70.00th=[17171], 80.00th=[21365], 90.00th=[38536], 95.00th=[55837], 00:20:58.741 | 99.00th=[87557], 99.50th=[89654], 99.90th=[96994], 99.95th=[96994], 00:20:58.741 | 99.99th=[96994] 00:20:58.741 bw ( KiB/s): min=13568, max=16376, per=24.78%, avg=14972.00, stdev=1985.56, samples=2 00:20:58.741 iops : min= 3392, max= 4094, avg=3743.00, stdev=496.39, samples=2 00:20:58.741 lat (msec) : 2=0.01%, 4=0.31%, 10=17.20%, 20=60.88%, 50=18.23% 00:20:58.741 lat (msec) : 100=3.37% 00:20:58.741 cpu : usr=3.67%, sys=3.77%, ctx=309, majf=0, minf=1 00:20:58.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:58.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.741 issued rwts: total=3584,3870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.741 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:20:58.741 job1: (groupid=0, jobs=1): err= 0: pid=943451: Fri Jul 12 11:26:44 2024 00:20:58.741 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:20:58.741 slat (nsec): min=1193, max=9972.8k, avg=120945.45, stdev=772761.75 00:20:58.741 clat (usec): min=1310, max=45898, avg=15529.72, stdev=5033.44 00:20:58.741 lat (usec): min=1330, max=45924, avg=15650.67, stdev=5119.70 00:20:58.741 clat percentiles (usec): 00:20:58.741 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[10028], 20.00th=[10814], 00:20:58.741 | 30.00th=[14091], 40.00th=[14877], 50.00th=[16450], 60.00th=[16909], 00:20:58.742 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19530], 95.00th=[20579], 00:20:58.742 | 99.00th=[36963], 99.50th=[36963], 99.90th=[39584], 99.95th=[41157], 00:20:58.742 | 99.99th=[45876] 00:20:58.742 write: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec); 0 zone resets 00:20:58.742 slat (nsec): min=1713, max=12979k, avg=118896.02, stdev=692290.81 00:20:58.742 clat (usec): min=285, max=78484, avg=18303.19, stdev=12336.24 00:20:58.742 lat (usec): min=311, max=78493, avg=18422.09, stdev=12371.18 00:20:58.742 clat percentiles (usec): 00:20:58.742 | 1.00th=[ 1336], 5.00th=[ 2802], 10.00th=[ 4178], 20.00th=[ 8848], 00:20:58.742 | 30.00th=[12649], 40.00th=[13960], 50.00th=[14877], 60.00th=[19530], 00:20:58.742 | 70.00th=[23725], 80.00th=[25822], 90.00th=[30540], 95.00th=[33817], 00:20:58.742 | 99.00th=[69731], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:20:58.742 | 99.99th=[78119] 00:20:58.742 bw ( KiB/s): min=11952, max=18168, per=24.93%, avg=15060.00, stdev=4395.38, samples=2 00:20:58.742 iops : min= 2988, max= 4542, avg=3765.00, stdev=1098.84, samples=2 00:20:58.742 lat (usec) : 500=0.09%, 750=0.08%, 1000=0.03% 00:20:58.742 lat (msec) : 2=0.76%, 4=3.38%, 10=12.23%, 20=60.89%, 50=21.07% 00:20:58.742 lat (msec) : 100=1.47% 00:20:58.742 cpu : usr=2.79%, sys=4.79%, ctx=353, majf=0, minf=1 00:20:58.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:58.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.742 issued rwts: total=3584,3892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.742 job2: (groupid=0, jobs=1): err= 0: pid=943452: Fri Jul 12 11:26:44 2024 00:20:58.742 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:20:58.742 slat (nsec): min=1095, max=16858k, avg=157147.90, stdev=1137192.09 00:20:58.742 clat (usec): min=3900, max=60862, avg=18687.31, stdev=13880.50 00:20:58.742 lat (usec): min=3907, max=60868, avg=18844.46, stdev=13980.82 00:20:58.742 clat percentiles (usec): 00:20:58.742 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11076], 00:20:58.742 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[13042], 00:20:58.742 | 70.00th=[14746], 80.00th=[23462], 90.00th=[43254], 95.00th=[56886], 00:20:58.742 | 99.00th=[60556], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:20:58.742 | 99.99th=[61080] 00:20:58.742 write: IOPS=3568, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:20:58.742 slat (usec): min=2, max=13071, avg=115.67, stdev=635.10 00:20:58.742 clat (usec): min=1488, max=59301, avg=16832.79, stdev=9954.88 00:20:58.742 lat (usec): min=1499, max=59310, avg=16948.46, stdev=10011.60 00:20:58.742 clat percentiles (usec): 00:20:58.742 | 1.00th=[ 5407], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:20:58.742 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11994], 60.00th=[14353], 00:20:58.742 | 70.00th=[19268], 80.00th=[23725], 90.00th=[30016], 95.00th=[38536], 00:20:58.742 | 99.00th=[52167], 99.50th=[52691], 99.90th=[58983], 99.95th=[59507], 00:20:58.742 | 99.99th=[59507] 00:20:58.742 bw ( KiB/s): min=12288, max=16384, per=23.73%, avg=14336.00, stdev=2896.31, samples=2 00:20:58.742 iops : min= 3072, max= 4096, 
avg=3584.00, stdev=724.08, samples=2 00:20:58.742 lat (msec) : 2=0.04%, 4=0.39%, 10=8.72%, 20=65.37%, 50=20.18% 00:20:58.742 lat (msec) : 100=5.30% 00:20:58.742 cpu : usr=2.59%, sys=4.18%, ctx=419, majf=0, minf=1 00:20:58.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:58.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.742 issued rwts: total=3584,3586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.742 job3: (groupid=0, jobs=1): err= 0: pid=943453: Fri Jul 12 11:26:44 2024 00:20:58.742 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:20:58.742 slat (usec): min=2, max=8696, avg=131.20, stdev=721.07 00:20:58.742 clat (usec): min=6916, max=42380, avg=16143.80, stdev=5620.46 00:20:58.742 lat (usec): min=6922, max=42390, avg=16275.00, stdev=5670.02 00:20:58.742 clat percentiles (usec): 00:20:58.742 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:20:58.742 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13960], 60.00th=[17171], 00:20:58.742 | 70.00th=[20055], 80.00th=[22414], 90.00th=[22938], 95.00th=[24773], 00:20:58.742 | 99.00th=[30278], 99.50th=[31589], 99.90th=[42206], 99.95th=[42206], 00:20:58.742 | 99.99th=[42206] 00:20:58.742 write: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1005msec); 0 zone resets 00:20:58.742 slat (usec): min=2, max=24447, avg=128.75, stdev=961.18 00:20:58.742 clat (usec): min=1916, max=64809, avg=17628.94, stdev=10548.57 00:20:58.742 lat (usec): min=5814, max=64813, avg=17757.69, stdev=10621.42 00:20:58.742 clat percentiles (usec): 00:20:58.742 | 1.00th=[ 6718], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11469], 00:20:58.742 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13960], 60.00th=[14615], 00:20:58.742 | 70.00th=[17695], 80.00th=[21627], 90.00th=[34341], 95.00th=[40633], 00:20:58.742 
| 99.00th=[60556], 99.50th=[63177], 99.90th=[64750], 99.95th=[64750], 00:20:58.742 | 99.99th=[64750] 00:20:58.742 bw ( KiB/s): min=13456, max=16760, per=25.01%, avg=15108.00, stdev=2336.28, samples=2 00:20:58.742 iops : min= 3364, max= 4190, avg=3777.00, stdev=584.07, samples=2 00:20:58.742 lat (msec) : 2=0.01%, 10=10.63%, 20=62.79%, 50=25.36%, 100=1.22% 00:20:58.742 cpu : usr=3.98%, sys=5.48%, ctx=267, majf=0, minf=1 00:20:58.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:58.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.742 issued rwts: total=3584,3905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.742 00:20:58.742 Run status group 0 (all jobs): 00:20:58.742 READ: bw=55.4MiB/s (58.1MB/s), 13.9MiB/s-13.9MiB/s (14.5MB/s-14.6MB/s), io=56.0MiB (58.7MB), run=1004-1010msec 00:20:58.742 WRITE: bw=59.0MiB/s (61.9MB/s), 13.9MiB/s-15.2MiB/s (14.6MB/s-15.9MB/s), io=59.6MiB (62.5MB), run=1004-1010msec 00:20:58.742 00:20:58.742 Disk stats (read/write): 00:20:58.742 nvme0n1: ios=3104/3311, merge=0/0, ticks=30110/38874, in_queue=68984, util=98.10% 00:20:58.742 nvme0n2: ios=3092/3319, merge=0/0, ticks=27620/32267, in_queue=59887, util=87.21% 00:20:58.742 nvme0n3: ios=3096/3143, merge=0/0, ticks=28744/21748, in_queue=50492, util=98.34% 00:20:58.742 nvme0n4: ios=3112/3473, merge=0/0, ticks=21776/26789, in_queue=48565, util=98.32% 00:20:58.742 11:26:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:58.742 11:26:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=943680 00:20:58.742 11:26:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:58.742 11:26:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:58.742 
[global] 00:20:58.742 thread=1 00:20:58.742 invalidate=1 00:20:58.742 rw=read 00:20:58.742 time_based=1 00:20:58.742 runtime=10 00:20:58.742 ioengine=libaio 00:20:58.742 direct=1 00:20:58.742 bs=4096 00:20:58.742 iodepth=1 00:20:58.742 norandommap=1 00:20:58.742 numjobs=1 00:20:58.742 00:20:58.742 [job0] 00:20:58.742 filename=/dev/nvme0n1 00:20:58.742 [job1] 00:20:58.742 filename=/dev/nvme0n2 00:20:58.742 [job2] 00:20:58.742 filename=/dev/nvme0n3 00:20:58.742 [job3] 00:20:58.742 filename=/dev/nvme0n4 00:20:58.742 Could not set queue depth (nvme0n1) 00:20:58.742 Could not set queue depth (nvme0n2) 00:20:58.742 Could not set queue depth (nvme0n3) 00:20:58.742 Could not set queue depth (nvme0n4) 00:20:58.742 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.742 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.742 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.742 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.742 fio-3.35 00:20:58.742 Starting 4 threads 00:21:02.019 11:26:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:02.019 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18866176, buflen=4096 00:21:02.019 fio: pid=943827, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:02.019 11:26:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:02.019 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=16580608, buflen=4096 00:21:02.019 fio: pid=943826, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:02.019 11:26:48 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.019 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:02.019 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=43401216, buflen=4096 00:21:02.019 fio: pid=943824, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:02.019 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.019 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:02.277 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20103168, buflen=4096 00:21:02.277 fio: pid=943825, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:02.277 00:21:02.277 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=943824: Fri Jul 12 11:26:48 2024 00:21:02.277 read: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(41.4MiB/3087msec) 00:21:02.277 slat (usec): min=5, max=27690, avg=13.78, stdev=331.20 00:21:02.277 clat (usec): min=186, max=41422, avg=273.64, stdev=812.38 00:21:02.277 lat (usec): min=195, max=41430, avg=287.42, stdev=878.32 00:21:02.277 clat percentiles (usec): 00:21:02.277 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 235], 00:21:02.277 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:21:02.277 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 314], 00:21:02.277 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 734], 99.95th=[ 8717], 00:21:02.277 | 99.99th=[41157] 00:21:02.277 bw ( KiB/s): min=11848, max=16256, per=48.94%, avg=14137.60, stdev=1571.05, samples=5 00:21:02.277 iops : min= 2962, max= 4064, avg=3534.40, stdev=392.76, samples=5 00:21:02.277 lat (usec) : 
250=49.33%, 500=50.54%, 750=0.03%, 1000=0.01% 00:21:02.277 lat (msec) : 2=0.01%, 10=0.03%, 20=0.01%, 50=0.04% 00:21:02.277 cpu : usr=2.04%, sys=5.25%, ctx=10603, majf=0, minf=1 00:21:02.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 issued rwts: total=10597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:02.277 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=943825: Fri Jul 12 11:26:48 2024 00:21:02.277 read: IOPS=1467, BW=5869KiB/s (6010kB/s)(19.2MiB/3345msec) 00:21:02.277 slat (usec): min=6, max=15799, avg=20.55, stdev=416.24 00:21:02.277 clat (usec): min=195, max=42042, avg=654.85, stdev=4032.83 00:21:02.277 lat (usec): min=202, max=42063, avg=674.23, stdev=4054.03 00:21:02.277 clat percentiles (usec): 00:21:02.277 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:21:02.277 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:21:02.277 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:21:02.277 | 99.00th=[ 1598], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:21:02.277 | 99.99th=[42206] 00:21:02.277 bw ( KiB/s): min= 96, max=15352, per=17.54%, avg=5067.83, stdev=6655.77, samples=6 00:21:02.277 iops : min= 24, max= 3838, avg=1266.83, stdev=1663.80, samples=6 00:21:02.277 lat (usec) : 250=55.78%, 500=42.90%, 750=0.26%, 1000=0.02% 00:21:02.277 lat (msec) : 2=0.02%, 50=1.00% 00:21:02.277 cpu : usr=0.81%, sys=1.61%, ctx=4916, majf=0, minf=1 00:21:02.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:02.277 issued rwts: total=4909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:02.277 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=943826: Fri Jul 12 11:26:48 2024 00:21:02.277 read: IOPS=1399, BW=5595KiB/s (5729kB/s)(15.8MiB/2894msec) 00:21:02.277 slat (usec): min=7, max=12828, avg=13.78, stdev=234.77 00:21:02.277 clat (usec): min=215, max=41780, avg=693.87, stdev=3926.14 00:21:02.277 lat (usec): min=223, max=41789, avg=705.76, stdev=3931.32 00:21:02.277 clat percentiles (usec): 00:21:02.277 | 1.00th=[ 237], 5.00th=[ 260], 10.00th=[ 285], 20.00th=[ 293], 00:21:02.277 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:21:02.277 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 383], 00:21:02.277 | 99.00th=[ 502], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:02.277 | 99.99th=[41681] 00:21:02.277 bw ( KiB/s): min= 96, max=12624, per=18.52%, avg=5350.40, stdev=5997.30, samples=5 00:21:02.277 iops : min= 24, max= 3156, avg=1337.60, stdev=1499.32, samples=5 00:21:02.277 lat (usec) : 250=3.16%, 500=95.80%, 750=0.07% 00:21:02.277 lat (msec) : 50=0.94% 00:21:02.277 cpu : usr=0.76%, sys=2.35%, ctx=4052, majf=0, minf=1 00:21:02.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 issued rwts: total=4049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:02.277 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=943827: Fri Jul 12 11:26:48 2024 00:21:02.277 read: IOPS=1698, BW=6791KiB/s (6954kB/s)(18.0MiB/2713msec) 00:21:02.277 slat (nsec): min=6269, max=30444, avg=7646.60, 
stdev=1609.20 00:21:02.277 clat (usec): min=205, max=41950, avg=574.71, stdev=3443.75 00:21:02.277 lat (usec): min=212, max=41973, avg=582.36, stdev=3444.67 00:21:02.277 clat percentiles (usec): 00:21:02.277 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:21:02.277 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:21:02.277 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 343], 95.00th=[ 375], 00:21:02.277 | 99.00th=[ 445], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:21:02.277 | 99.99th=[42206] 00:21:02.277 bw ( KiB/s): min= 96, max=14768, per=22.95%, avg=6630.40, stdev=6989.91, samples=5 00:21:02.277 iops : min= 24, max= 3692, avg=1657.60, stdev=1747.48, samples=5 00:21:02.277 lat (usec) : 250=14.48%, 500=84.70%, 750=0.09% 00:21:02.277 lat (msec) : 50=0.72% 00:21:02.277 cpu : usr=0.59%, sys=1.47%, ctx=4608, majf=0, minf=2 00:21:02.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.277 issued rwts: total=4607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:02.277 00:21:02.277 Run status group 0 (all jobs): 00:21:02.277 READ: bw=28.2MiB/s (29.6MB/s), 5595KiB/s-13.4MiB/s (5729kB/s-14.1MB/s), io=94.4MiB (99.0MB), run=2713-3345msec 00:21:02.277 00:21:02.277 Disk stats (read/write): 00:21:02.277 nvme0n1: ios=10024/0, merge=0/0, ticks=2778/0, in_queue=2778, util=98.40% 00:21:02.277 nvme0n2: ios=4132/0, merge=0/0, ticks=3199/0, in_queue=3199, util=97.74% 00:21:02.277 nvme0n3: ios=4091/0, merge=0/0, ticks=3761/0, in_queue=3761, util=99.26% 00:21:02.277 nvme0n4: ios=4407/0, merge=0/0, ticks=3261/0, in_queue=3261, util=99.37% 00:21:02.277 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:21:02.277 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:02.535 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.535 11:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:02.792 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.792 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:03.050 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.050 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:03.307 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:03.307 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:03.564 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:03.564 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 943680 00:21:03.564 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:03.564 11:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:04.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:04.936 
11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:04.936 nvmf hotplug test: fio failed as expected 00:21:04.936 11:26:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for 
i in {1..20} 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.936 rmmod nvme_tcp 00:21:04.936 rmmod nvme_fabrics 00:21:04.936 rmmod nvme_keyring 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 940761 ']' 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 940761 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 940761 ']' 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 940761 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.936 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940761 00:21:04.937 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:04.937 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:04.937 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940761' 00:21:04.937 killing process with pid 940761 00:21:04.937 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 940761 00:21:04.937 11:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 940761 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.310 11:26:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.840 11:26:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.840 00:21:08.840 real 0m29.020s 00:21:08.840 user 1m55.932s 00:21:08.840 sys 0m7.889s 00:21:08.840 11:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.840 11:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.840 ************************************ 00:21:08.840 END TEST nvmf_fio_target 00:21:08.840 ************************************ 00:21:08.840 11:26:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:08.840 11:26:54 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:08.840 11:26:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.840 11:26:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.840 11:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.840 ************************************ 00:21:08.840 START TEST nvmf_bdevio 00:21:08.840 ************************************ 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:08.840 * Looking for test storage... 
00:21:08.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.840 11:26:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.841 11:26:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.841 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.841 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.841 11:26:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.841 11:26:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.019 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.019 Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:21:13.019 00:21:13.019 --- 10.0.0.2 ping statistics --- 00:21:13.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.019 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:21:13.019 00:21:13.019 --- 10.0.0.1 ping statistics --- 00:21:13.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.019 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.019 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=948340 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 948340 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 948340 ']' 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.276 
11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:13.276 11:26:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:13.276 [2024-07-12 11:26:59.453796] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:13.276 [2024-07-12 11:26:59.453882] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.276 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.276 [2024-07-12 11:26:59.565229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.532 [2024-07-12 11:26:59.780263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.532 [2024-07-12 11:26:59.780311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.532 [2024-07-12 11:26:59.780323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.532 [2024-07-12 11:26:59.780331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.532 [2024-07-12 11:26:59.780340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.532 [2024-07-12 11:26:59.780507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:13.532 [2024-07-12 11:26:59.780594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:13.532 [2024-07-12 11:26:59.780661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.532 [2024-07-12 11:26:59.780686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 [2024-07-12 11:27:00.281643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 Malloc0 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 [2024-07-12 11:27:00.396399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:21:14.096 { 00:21:14.096 "params": { 00:21:14.096 "name": "Nvme$subsystem", 00:21:14.096 "trtype": "$TEST_TRANSPORT", 00:21:14.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.096 "adrfam": "ipv4", 00:21:14.096 "trsvcid": "$NVMF_PORT", 00:21:14.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.096 "hdgst": ${hdgst:-false}, 00:21:14.096 "ddgst": ${ddgst:-false} 00:21:14.096 }, 00:21:14.096 "method": "bdev_nvme_attach_controller" 00:21:14.096 } 00:21:14.096 EOF 00:21:14.096 )") 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:21:14.096 11:27:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:14.096 "params": { 00:21:14.096 "name": "Nvme1", 00:21:14.096 "trtype": "tcp", 00:21:14.096 "traddr": "10.0.0.2", 00:21:14.096 "adrfam": "ipv4", 00:21:14.096 "trsvcid": "4420", 00:21:14.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.096 "hdgst": false, 00:21:14.096 "ddgst": false 00:21:14.096 }, 00:21:14.096 "method": "bdev_nvme_attach_controller" 00:21:14.096 }' 00:21:14.353 [2024-07-12 11:27:00.470021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:14.353 [2024-07-12 11:27:00.470108] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948564 ] 00:21:14.353 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.353 [2024-07-12 11:27:00.575672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.610 [2024-07-12 11:27:00.818613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.610 [2024-07-12 11:27:00.818679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.610 [2024-07-12 11:27:00.818686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.184 I/O targets: 00:21:15.184 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:15.184 00:21:15.184 00:21:15.184 CUnit - A unit testing framework for C - Version 2.1-3 00:21:15.184 http://cunit.sourceforge.net/ 00:21:15.184 00:21:15.184 00:21:15.184 Suite: bdevio tests on: Nvme1n1 00:21:15.184 Test: blockdev write read block ...passed 00:21:15.184 Test: blockdev write zeroes read block ...passed 00:21:15.184 Test: blockdev write zeroes read no split ...passed 00:21:15.184 Test: blockdev write zeroes read split ...passed 00:21:15.441 Test: blockdev write zeroes read split partial ...passed 00:21:15.441 Test: blockdev reset ...[2024-07-12 11:27:01.544156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:15.441 [2024-07-12 11:27:01.544279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:21:15.441 [2024-07-12 11:27:01.560792] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:15.441 passed 00:21:15.441 Test: blockdev write read 8 blocks ...passed 00:21:15.441 Test: blockdev write read size > 128k ...passed 00:21:15.441 Test: blockdev write read invalid size ...passed 00:21:15.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:15.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:15.441 Test: blockdev write read max offset ...passed 00:21:15.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:15.441 Test: blockdev writev readv 8 blocks ...passed 00:21:15.441 Test: blockdev writev readv 30 x 1block ...passed 00:21:15.441 Test: blockdev writev readv block ...passed 00:21:15.441 Test: blockdev writev readv size > 128k ...passed 00:21:15.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:15.441 Test: blockdev comparev and writev ...[2024-07-12 11:27:01.732735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.732783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.732819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:15.441 [2024-07-12 11:27:01.733879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:15.441 [2024-07-12 11:27:01.733890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:15.441 passed 00:21:15.698 Test: blockdev nvme passthru rw ...passed 00:21:15.698 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:27:01.815838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:15.698 [2024-07-12 11:27:01.815878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:15.698 [2024-07-12 11:27:01.816051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:15.698 [2024-07-12 11:27:01.816067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:15.698 [2024-07-12 11:27:01.816202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:15.698 [2024-07-12 11:27:01.816216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:15.698 [2024-07-12 11:27:01.816347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:15.698 [2024-07-12 11:27:01.816362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:15.698 passed 00:21:15.698 Test: blockdev nvme admin passthru ...passed 00:21:15.698 Test: blockdev copy ...passed 00:21:15.698 00:21:15.698 Run Summary: Type Total Ran Passed Failed Inactive 00:21:15.698 suites 1 1 n/a 0 0 00:21:15.698 tests 23 23 23 0 0 00:21:15.698 asserts 152 152 152 0 n/a 00:21:15.698 00:21:15.698 Elapsed time = 1.172 seconds 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.627 11:27:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.627 rmmod nvme_tcp 00:21:16.884 rmmod nvme_fabrics 00:21:16.884 rmmod nvme_keyring 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 948340 ']' 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 948340 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 948340 ']' 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 948340 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948340 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948340' 00:21:16.884 killing process with pid 948340 00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 948340 
00:21:16.884 11:27:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 948340 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.779 11:27:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.678 11:27:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.678 00:21:20.678 real 0m11.967s 00:21:20.678 user 0m23.810s 00:21:20.678 sys 0m4.085s 00:21:20.678 11:27:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.678 11:27:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:20.678 ************************************ 00:21:20.678 END TEST nvmf_bdevio 00:21:20.678 ************************************ 00:21:20.678 11:27:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:20.678 11:27:06 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.678 11:27:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:20.678 11:27:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.678 11:27:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.678 ************************************ 00:21:20.678 START TEST nvmf_auth_target 00:21:20.678 
************************************ 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.678 * Looking for test storage... 00:21:20.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.678 11:27:06 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.678 11:27:06 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.678 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.678 11:27:06 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.679 11:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.942 11:27:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:25.942 11:27:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.942 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.942 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:25.942 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
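To map each PCI address to its kernel interface name, the trace globs the device's sysfs `net/` directory and then strips the directory prefix. The sketch below reproduces that idiom with a literal path instead of a live sysfs glob, so it runs on any machine:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob + basename-strip idiom from the trace.
# On a real system the array would come from the glob
#   /sys/bus/pci/devices/$pci/net/*
# here a literal path stands in so the example is self-contained.
pci="0000:86:00.0"
pci_net_devs=("/sys/bus/pci/devices/$pci/net/cvl_0_0")

# ${arr[@]##*/} removes everything up to the last '/' in each element,
# leaving just the interface name (common.sh@399 in the trace).
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

Applying `##*/` across the whole array in one expansion is what turns a list of sysfs paths into a list of interface names like `cvl_0_0`.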
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:25.943 11:27:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:25.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:25.943 00:21:25.943 --- 10.0.0.2 ping statistics --- 00:21:25.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.943 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:21:25.943 00:21:25.943 --- 10.0.0.1 ping statistics --- 00:21:25.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.943 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
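The `nvmf_tcp_init` sequence above moves one port of the NIC into a private network namespace so that the initiator (default namespace, 10.0.0.1 on `cvl_0_1`) and the target (namespace `cvl_0_0_ns_spdk`, 10.0.0.2 on `cvl_0_0`) talk over a real TCP path on a single host, then verifies reachability with a ping in each direction. Those commands need root and physical interfaces, so the sketch below only assembles and prints them as a dry run; the names mirror the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init wiring from the trace.
# Executing these for real requires root and the actual NIC ports,
# so the commands are collected and printed instead of run.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"                      # target port enters the namespace
  "ip addr add 10.0.0.1/24 dev $INI_IF"                # initiator side, default namespace
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"  # allow NVMe/TCP port
)
printf '%s\n' "${cmds[@]}"
```

Only after both pings succeed does the script prefix `NVMF_APP` with `ip netns exec $NVMF_TARGET_NAMESPACE`, so the target daemon itself runs inside the namespace.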
# xtrace_disable 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=953124 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 953124 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 953124 ']' 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.943 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=953282 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=8eb6960d947097580bf400c166beee1b6655ab5f4e493e63 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Yw 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8eb6960d947097580bf400c166beee1b6655ab5f4e493e63 0 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8eb6960d947097580bf400c166beee1b6655ab5f4e493e63 0 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8eb6960d947097580bf400c166beee1b6655ab5f4e493e63 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:26.884 11:27:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Yw 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Yw 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.5Yw 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:26.884 11:27:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e38b0bafc20a066ad3a4c2262b55cac66d94bb05213d429dd37f2015a5120bdd 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zjH 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e38b0bafc20a066ad3a4c2262b55cac66d94bb05213d429dd37f2015a5120bdd 3 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e38b0bafc20a066ad3a4c2262b55cac66d94bb05213d429dd37f2015a5120bdd 3 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e38b0bafc20a066ad3a4c2262b55cac66d94bb05213d429dd37f2015a5120bdd 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zjH 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zjH 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.zjH 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e9ff47a1140507785cf0f7b2bf5dc6d 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RsA 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e9ff47a1140507785cf0f7b2bf5dc6d 1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e9ff47a1140507785cf0f7b2bf5dc6d 1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e9ff47a1140507785cf0f7b2bf5dc6d 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RsA 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RsA 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.RsA 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9f99cf9153a88f4c2c5972f5d9d3317e12c1695132cbaf3a 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DOI 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9f99cf9153a88f4c2c5972f5d9d3317e12c1695132cbaf3a 2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9f99cf9153a88f4c2c5972f5d9d3317e12c1695132cbaf3a 2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9f99cf9153a88f4c2c5972f5d9d3317e12c1695132cbaf3a 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DOI 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DOI 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.DOI 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cfec287d365de1b90039af7943c52fc610907d2428b1db17 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.X8M 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cfec287d365de1b90039af7943c52fc610907d2428b1db17 2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cfec287d365de1b90039af7943c52fc610907d2428b1db17 2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cfec287d365de1b90039af7943c52fc610907d2428b1db17 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:26.884 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.X8M 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.X8M 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.X8M 00:21:27.141 11:27:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:27.141 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=63dc702b6be2ff68da0aeb7a504b62fb 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Cx 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 63dc702b6be2ff68da0aeb7a504b62fb 1 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 63dc702b6be2ff68da0aeb7a504b62fb 1 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=63dc702b6be2ff68da0aeb7a504b62fb 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Cx 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Cx 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9Cx 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=609701b06b9e25e75dd69525b6703b8b5c9d03c1bdfc40d83fb118f9f56e2acc 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zeA 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 609701b06b9e25e75dd69525b6703b8b5c9d03c1bdfc40d83fb118f9f56e2acc 3 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 609701b06b9e25e75dd69525b6703b8b5c9d03c1bdfc40d83fb118f9f56e2acc 3 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=609701b06b9e25e75dd69525b6703b8b5c9d03c1bdfc40d83fb118f9f56e2acc 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zeA 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zeA 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zeA 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 953124 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 953124 ']' 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
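Each `gen_dhchap_key <digest> <len>` call in the trace reads `len/2` random bytes with `xxd -p`, wraps the resulting hex string into a `DHHC-1:<digest>:<base64>:` secret via an embedded python snippet, writes it to a `mktemp` file, and locks the file to mode 0600. The sketch below reproduces that flow with a fixed key (the one from the trace) for determinism; the CRC32-plus-base64 framing is an approximation of what `format_key`'s python step appears to do, not SPDK's canonical code:

```shell
#!/usr/bin/env bash
# Sketch of gen_dhchap_key: hex key -> "DHHC-1:<digest>:<b64(key+crc32)>:".
# A fixed key replaces the trace's `xxd -p -c0 -l 24 /dev/urandom` so the
# example is deterministic; digest 0 means "null" (no hash).
key=8eb6960d947097580bf400c166beee1b6655ab5f4e493e63  # 48 hex chars, from the trace
digest=0

secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                 # the ASCII hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF
)

file=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$file"
chmod 0600 "$file"                         # key files must not be world-readable
echo "$secret"
```

With a 48-character key the base64 payload encodes 52 bytes (key plus 4-byte CRC), giving an 83-character secret overall. The digest nibble (`00`/`01`/`02`/`03` for null/sha256/sha384/sha512) is why the trace's `digests` map assigns those numbers.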
00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.142 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 953282 /var/tmp/host.sock 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 953282 ']' 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:27.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.400 11:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Yw 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5Yw 00:21:27.968 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5Yw 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.zjH ]] 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zjH 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.226 11:27:14 
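Both daemon launches above (`nvmf_tgt` in the namespace, `spdk_tgt` for the host side) are followed by `waitforlisten`, which polls with `max_retries=100` until the daemon's RPC UNIX domain socket exists before any RPCs are issued. A minimal, hypothetical re-implementation of that polling loop:

```shell
#!/usr/bin/env bash
# Minimal sketch of the waitforlisten pattern: poll until a UNIX
# socket appears, give up after max_retries. The real helper in
# autotest_common.sh also checks the PID is still alive; omitted here.
waitforlisten() {
  local rpc_addr=$1 max_retries=${2:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    [[ -S $rpc_addr ]] && return 0   # -S: path exists and is a socket
    sleep 0.01
  done
  return 1
}

# A socket that never appears should time out with a nonzero status.
waitforlisten /tmp/no-such.sock 3 || echo "timed out as expected"
```

In the trace the loop exits on its first check (`(( i == 0 ))` then `return 0`), meaning both sockets (`/var/tmp/spdk.sock` and `/var/tmp/host.sock`) were already listening.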
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zjH 00:21:28.226 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zjH 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RsA 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RsA 00:21:28.485 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RsA 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.DOI ]] 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DOI 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DOI 00:21:28.743 11:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DOI 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.X8M 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.X8M 00:21:28.743 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.X8M 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9Cx ]] 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Cx 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Cx 00:21:29.002 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.9Cx 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zeA 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zeA 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zeA 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:29.298 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:29.571 11:27:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.571 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.829 00:21:29.829 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.829 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.829 11:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.830 
11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.830 { 00:21:29.830 "cntlid": 1, 00:21:29.830 "qid": 0, 00:21:29.830 "state": "enabled", 00:21:29.830 "thread": "nvmf_tgt_poll_group_000", 00:21:29.830 "listen_address": { 00:21:29.830 "trtype": "TCP", 00:21:29.830 "adrfam": "IPv4", 00:21:29.830 "traddr": "10.0.0.2", 00:21:29.830 "trsvcid": "4420" 00:21:29.830 }, 00:21:29.830 "peer_address": { 00:21:29.830 "trtype": "TCP", 00:21:29.830 "adrfam": "IPv4", 00:21:29.830 "traddr": "10.0.0.1", 00:21:29.830 "trsvcid": "48316" 00:21:29.830 }, 00:21:29.830 "auth": { 00:21:29.830 "state": "completed", 00:21:29.830 "digest": "sha256", 00:21:29.830 "dhgroup": "null" 00:21:29.830 } 00:21:29.830 } 00:21:29.830 ]' 00:21:29.830 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.089 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.348 11:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.915 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.916 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.916 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.916 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.916 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.916 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.174 00:21:31.174 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.174 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.174 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.432 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.432 { 00:21:31.432 "cntlid": 3, 00:21:31.432 "qid": 0, 00:21:31.432 "state": "enabled", 00:21:31.432 "thread": "nvmf_tgt_poll_group_000", 00:21:31.432 "listen_address": { 00:21:31.432 "trtype": "TCP", 00:21:31.432 "adrfam": "IPv4", 00:21:31.432 "traddr": "10.0.0.2", 00:21:31.432 "trsvcid": "4420" 00:21:31.432 }, 00:21:31.432 "peer_address": { 00:21:31.432 "trtype": "TCP", 00:21:31.432 "adrfam": "IPv4", 00:21:31.432 "traddr": "10.0.0.1", 00:21:31.432 "trsvcid": "57110" 00:21:31.432 }, 00:21:31.432 "auth": { 00:21:31.432 "state": "completed", 00:21:31.432 "digest": "sha256", 00:21:31.433 "dhgroup": "null" 00:21:31.433 } 00:21:31.433 } 00:21:31.433 ]' 00:21:31.433 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.433 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.433 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.433 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.433 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.692 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.692 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:21:31.692 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.692 11:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:32.261 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.520 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.779 00:21:32.779 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.779 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.779 11:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.779 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.779 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.779 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.779 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.038 { 00:21:33.038 "cntlid": 5, 00:21:33.038 "qid": 0, 00:21:33.038 "state": "enabled", 00:21:33.038 "thread": "nvmf_tgt_poll_group_000", 00:21:33.038 "listen_address": { 00:21:33.038 "trtype": "TCP", 00:21:33.038 "adrfam": "IPv4", 00:21:33.038 "traddr": "10.0.0.2", 00:21:33.038 "trsvcid": "4420" 00:21:33.038 }, 00:21:33.038 "peer_address": { 00:21:33.038 "trtype": "TCP", 00:21:33.038 "adrfam": "IPv4", 00:21:33.038 "traddr": "10.0.0.1", 00:21:33.038 "trsvcid": "57148" 00:21:33.038 }, 00:21:33.038 "auth": { 00:21:33.038 "state": "completed", 00:21:33.038 "digest": "sha256", 00:21:33.038 "dhgroup": "null" 00:21:33.038 } 00:21:33.038 } 00:21:33.038 ]' 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.038 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.297 11:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.865 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:33.866 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:34.125 11:27:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.125 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.125 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.384 { 00:21:34.384 "cntlid": 7, 00:21:34.384 "qid": 0, 00:21:34.384 "state": "enabled", 00:21:34.384 "thread": "nvmf_tgt_poll_group_000", 00:21:34.384 "listen_address": { 00:21:34.384 "trtype": "TCP", 00:21:34.384 "adrfam": "IPv4", 00:21:34.384 "traddr": "10.0.0.2", 00:21:34.384 "trsvcid": "4420" 00:21:34.384 }, 00:21:34.384 "peer_address": { 00:21:34.384 "trtype": "TCP", 00:21:34.384 "adrfam": "IPv4", 00:21:34.384 "traddr": "10.0.0.1", 00:21:34.384 "trsvcid": "57168" 00:21:34.384 }, 00:21:34.384 "auth": { 00:21:34.384 "state": "completed", 00:21:34.384 "digest": "sha256", 00:21:34.384 "dhgroup": "null" 00:21:34.384 } 00:21:34.384 } 00:21:34.384 ]' 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:34.384 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.643 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.643 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:34.643 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.644 11:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:35.212 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.471 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.729 00:21:35.729 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.729 11:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.729 11:27:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.987 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.988 { 00:21:35.988 "cntlid": 9, 00:21:35.988 "qid": 0, 00:21:35.988 "state": "enabled", 00:21:35.988 "thread": "nvmf_tgt_poll_group_000", 00:21:35.988 "listen_address": { 00:21:35.988 "trtype": "TCP", 00:21:35.988 "adrfam": "IPv4", 00:21:35.988 "traddr": "10.0.0.2", 00:21:35.988 "trsvcid": "4420" 00:21:35.988 }, 00:21:35.988 "peer_address": { 00:21:35.988 "trtype": "TCP", 00:21:35.988 "adrfam": "IPv4", 00:21:35.988 "traddr": "10.0.0.1", 00:21:35.988 "trsvcid": "57196" 00:21:35.988 }, 00:21:35.988 "auth": { 00:21:35.988 "state": "completed", 00:21:35.988 "digest": "sha256", 00:21:35.988 "dhgroup": "ffdhe2048" 00:21:35.988 } 00:21:35.988 } 00:21:35.988 ]' 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.988 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.246 11:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:36.821 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.082 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.340 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.340 11:27:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.340 { 00:21:37.340 "cntlid": 11, 00:21:37.340 "qid": 0, 00:21:37.340 "state": "enabled", 00:21:37.340 "thread": "nvmf_tgt_poll_group_000", 00:21:37.340 "listen_address": { 00:21:37.340 "trtype": "TCP", 00:21:37.340 "adrfam": "IPv4", 00:21:37.340 "traddr": "10.0.0.2", 00:21:37.340 "trsvcid": "4420" 00:21:37.340 }, 00:21:37.340 "peer_address": { 00:21:37.340 "trtype": "TCP", 00:21:37.340 "adrfam": "IPv4", 00:21:37.340 "traddr": "10.0.0.1", 00:21:37.340 "trsvcid": "57228" 00:21:37.340 }, 00:21:37.340 "auth": { 00:21:37.340 "state": "completed", 00:21:37.340 "digest": "sha256", 00:21:37.340 "dhgroup": "ffdhe2048" 00:21:37.340 } 00:21:37.340 } 00:21:37.340 ]' 00:21:37.340 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.598 11:27:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.598 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.857 11:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.425 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.685 
00:21:38.685 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.685 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.685 11:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.944 { 00:21:38.944 "cntlid": 13, 00:21:38.944 "qid": 0, 00:21:38.944 "state": "enabled", 00:21:38.944 "thread": "nvmf_tgt_poll_group_000", 00:21:38.944 "listen_address": { 00:21:38.944 "trtype": "TCP", 00:21:38.944 "adrfam": "IPv4", 00:21:38.944 "traddr": "10.0.0.2", 00:21:38.944 "trsvcid": "4420" 00:21:38.944 }, 00:21:38.944 "peer_address": { 00:21:38.944 "trtype": "TCP", 00:21:38.944 "adrfam": "IPv4", 00:21:38.944 "traddr": "10.0.0.1", 00:21:38.944 "trsvcid": "57252" 00:21:38.944 }, 00:21:38.944 "auth": { 00:21:38.944 "state": "completed", 00:21:38.944 "digest": "sha256", 00:21:38.944 "dhgroup": "ffdhe2048" 00:21:38.944 } 00:21:38.944 } 00:21:38.944 ]' 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.944 11:27:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.944 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.203 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.203 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.203 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.203 11:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:21:39.769 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.027 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.285 
00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.285 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.543 { 00:21:40.543 "cntlid": 15, 00:21:40.543 "qid": 0, 00:21:40.543 "state": "enabled", 00:21:40.543 "thread": "nvmf_tgt_poll_group_000", 00:21:40.543 "listen_address": { 00:21:40.543 "trtype": "TCP", 00:21:40.543 "adrfam": "IPv4", 00:21:40.543 "traddr": "10.0.0.2", 00:21:40.543 "trsvcid": "4420" 00:21:40.543 }, 00:21:40.543 "peer_address": { 00:21:40.543 "trtype": "TCP", 00:21:40.543 "adrfam": "IPv4", 00:21:40.543 "traddr": "10.0.0.1", 00:21:40.543 "trsvcid": "53046" 00:21:40.543 }, 00:21:40.543 "auth": { 00:21:40.543 "state": "completed", 00:21:40.543 "digest": "sha256", 00:21:40.543 "dhgroup": "ffdhe2048" 00:21:40.543 } 00:21:40.543 } 00:21:40.543 ]' 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.543 11:27:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.543 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.801 11:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.367 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.625 00:21:41.625 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.625 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.625 11:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.884 { 00:21:41.884 "cntlid": 17, 00:21:41.884 "qid": 0, 00:21:41.884 "state": "enabled", 00:21:41.884 "thread": "nvmf_tgt_poll_group_000", 00:21:41.884 "listen_address": { 00:21:41.884 "trtype": "TCP", 00:21:41.884 "adrfam": "IPv4", 00:21:41.884 "traddr": "10.0.0.2", 00:21:41.884 "trsvcid": "4420" 00:21:41.884 }, 00:21:41.884 "peer_address": { 00:21:41.884 "trtype": "TCP", 00:21:41.884 "adrfam": "IPv4", 00:21:41.884 "traddr": "10.0.0.1", 00:21:41.884 "trsvcid": "53086" 00:21:41.884 }, 00:21:41.884 "auth": { 00:21:41.884 "state": "completed", 00:21:41.884 "digest": "sha256", 00:21:41.884 "dhgroup": "ffdhe3072" 00:21:41.884 } 00:21:41.884 } 00:21:41.884 ]' 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.884 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.142 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.142 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.142 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.142 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:42.710 11:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.969 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.227 00:21:43.227 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.227 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.227 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.485 { 00:21:43.485 "cntlid": 19, 00:21:43.485 "qid": 0, 00:21:43.485 "state": "enabled", 00:21:43.485 "thread": "nvmf_tgt_poll_group_000", 00:21:43.485 "listen_address": { 00:21:43.485 "trtype": "TCP", 00:21:43.485 "adrfam": "IPv4", 00:21:43.485 "traddr": "10.0.0.2", 00:21:43.485 "trsvcid": "4420" 00:21:43.485 }, 00:21:43.485 "peer_address": { 00:21:43.485 "trtype": "TCP", 00:21:43.485 "adrfam": "IPv4", 00:21:43.485 "traddr": "10.0.0.1", 00:21:43.485 "trsvcid": "53116" 00:21:43.485 }, 00:21:43.485 "auth": { 00:21:43.485 "state": "completed", 00:21:43.485 "digest": "sha256", 00:21:43.485 "dhgroup": "ffdhe3072" 00:21:43.485 } 00:21:43.485 } 00:21:43.485 ]' 00:21:43.485 
11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.485 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.742 11:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.308 11:27:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.308 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:44.566 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.825 00:21:44.825 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.825 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.825 11:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.825 { 00:21:44.825 "cntlid": 21, 00:21:44.825 "qid": 0, 00:21:44.825 "state": "enabled", 00:21:44.825 "thread": "nvmf_tgt_poll_group_000", 00:21:44.825 "listen_address": { 00:21:44.825 "trtype": "TCP", 00:21:44.825 "adrfam": "IPv4", 00:21:44.825 "traddr": "10.0.0.2", 00:21:44.825 "trsvcid": "4420" 00:21:44.825 }, 00:21:44.825 "peer_address": { 00:21:44.825 "trtype": "TCP", 00:21:44.825 "adrfam": "IPv4", 00:21:44.825 "traddr": "10.0.0.1", 00:21:44.825 "trsvcid": "53152" 00:21:44.825 }, 00:21:44.825 "auth": { 00:21:44.825 "state": "completed", 00:21:44.825 "digest": 
"sha256", 00:21:44.825 "dhgroup": "ffdhe3072" 00:21:44.825 } 00:21:44.825 } 00:21:44.825 ]' 00:21:44.825 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.084 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.343 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:21:45.910 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.910 11:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.910 11:27:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.910 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.911 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.911 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.168 00:21:46.168 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.168 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.168 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.427 { 00:21:46.427 "cntlid": 23, 00:21:46.427 "qid": 0, 00:21:46.427 "state": "enabled", 00:21:46.427 "thread": "nvmf_tgt_poll_group_000", 00:21:46.427 "listen_address": { 00:21:46.427 "trtype": "TCP", 00:21:46.427 "adrfam": "IPv4", 00:21:46.427 "traddr": "10.0.0.2", 00:21:46.427 "trsvcid": "4420" 00:21:46.427 }, 00:21:46.427 "peer_address": { 00:21:46.427 "trtype": "TCP", 00:21:46.427 "adrfam": "IPv4", 00:21:46.427 "traddr": "10.0.0.1", 00:21:46.427 "trsvcid": "53188" 00:21:46.427 }, 00:21:46.427 "auth": 
{ 00:21:46.427 "state": "completed", 00:21:46.427 "digest": "sha256", 00:21:46.427 "dhgroup": "ffdhe3072" 00:21:46.427 } 00:21:46.427 } 00:21:46.427 ]' 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.427 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.687 11:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.254 11:27:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:47.254 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.513 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.514 11:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.514 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.514 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.773 00:21:47.773 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.773 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.773 11:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.773 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.031 { 00:21:48.031 "cntlid": 25, 00:21:48.031 "qid": 0, 00:21:48.031 "state": "enabled", 00:21:48.031 "thread": "nvmf_tgt_poll_group_000", 00:21:48.031 "listen_address": { 00:21:48.031 "trtype": "TCP", 00:21:48.031 "adrfam": "IPv4", 00:21:48.031 "traddr": "10.0.0.2", 00:21:48.031 "trsvcid": "4420" 00:21:48.031 }, 00:21:48.031 "peer_address": { 00:21:48.031 "trtype": "TCP", 
00:21:48.031 "adrfam": "IPv4", 00:21:48.031 "traddr": "10.0.0.1", 00:21:48.031 "trsvcid": "53206" 00:21:48.031 }, 00:21:48.031 "auth": { 00:21:48.031 "state": "completed", 00:21:48.031 "digest": "sha256", 00:21:48.031 "dhgroup": "ffdhe4096" 00:21:48.031 } 00:21:48.031 } 00:21:48.031 ]' 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.031 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.290 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:21:48.859 11:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.859 11:27:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.859 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.118 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.376 { 00:21:49.376 "cntlid": 27, 00:21:49.376 "qid": 0, 00:21:49.376 "state": "enabled", 00:21:49.376 "thread": "nvmf_tgt_poll_group_000", 00:21:49.376 "listen_address": { 00:21:49.376 "trtype": "TCP", 00:21:49.376 "adrfam": 
"IPv4", 00:21:49.376 "traddr": "10.0.0.2", 00:21:49.376 "trsvcid": "4420" 00:21:49.376 }, 00:21:49.376 "peer_address": { 00:21:49.376 "trtype": "TCP", 00:21:49.376 "adrfam": "IPv4", 00:21:49.376 "traddr": "10.0.0.1", 00:21:49.376 "trsvcid": "53232" 00:21:49.376 }, 00:21:49.376 "auth": { 00:21:49.376 "state": "completed", 00:21:49.376 "digest": "sha256", 00:21:49.376 "dhgroup": "ffdhe4096" 00:21:49.376 } 00:21:49.376 } 00:21:49.376 ]' 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.376 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.635 11:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.203 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.462 11:27:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.462 11:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.721 00:21:50.721 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.721 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.721 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.980 { 00:21:50.980 "cntlid": 29, 00:21:50.980 "qid": 0, 00:21:50.980 "state": "enabled", 00:21:50.980 "thread": 
"nvmf_tgt_poll_group_000", 00:21:50.980 "listen_address": { 00:21:50.980 "trtype": "TCP", 00:21:50.980 "adrfam": "IPv4", 00:21:50.980 "traddr": "10.0.0.2", 00:21:50.980 "trsvcid": "4420" 00:21:50.980 }, 00:21:50.980 "peer_address": { 00:21:50.980 "trtype": "TCP", 00:21:50.980 "adrfam": "IPv4", 00:21:50.980 "traddr": "10.0.0.1", 00:21:50.980 "trsvcid": "58744" 00:21:50.980 }, 00:21:50.980 "auth": { 00:21:50.980 "state": "completed", 00:21:50.980 "digest": "sha256", 00:21:50.980 "dhgroup": "ffdhe4096" 00:21:50.980 } 00:21:50.980 } 00:21:50.980 ]' 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.980 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.238 11:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.805 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.806 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.064 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.323 00:21:52.323 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.323 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.323 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.582 { 00:21:52.582 "cntlid": 31, 00:21:52.582 "qid": 0, 00:21:52.582 "state": "enabled", 00:21:52.582 "thread": 
"nvmf_tgt_poll_group_000", 00:21:52.582 "listen_address": { 00:21:52.582 "trtype": "TCP", 00:21:52.582 "adrfam": "IPv4", 00:21:52.582 "traddr": "10.0.0.2", 00:21:52.582 "trsvcid": "4420" 00:21:52.582 }, 00:21:52.582 "peer_address": { 00:21:52.582 "trtype": "TCP", 00:21:52.582 "adrfam": "IPv4", 00:21:52.582 "traddr": "10.0.0.1", 00:21:52.582 "trsvcid": "58780" 00:21:52.582 }, 00:21:52.582 "auth": { 00:21:52.582 "state": "completed", 00:21:52.582 "digest": "sha256", 00:21:52.582 "dhgroup": "ffdhe4096" 00:21:52.582 } 00:21:52.582 } 00:21:52.582 ]' 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.582 11:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.853 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.498 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.498 11:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.757 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.016 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:21:54.016 { 00:21:54.016 "cntlid": 33, 00:21:54.016 "qid": 0, 00:21:54.016 "state": "enabled", 00:21:54.016 "thread": "nvmf_tgt_poll_group_000", 00:21:54.016 "listen_address": { 00:21:54.016 "trtype": "TCP", 00:21:54.016 "adrfam": "IPv4", 00:21:54.016 "traddr": "10.0.0.2", 00:21:54.016 "trsvcid": "4420" 00:21:54.016 }, 00:21:54.016 "peer_address": { 00:21:54.016 "trtype": "TCP", 00:21:54.017 "adrfam": "IPv4", 00:21:54.017 "traddr": "10.0.0.1", 00:21:54.017 "trsvcid": "58814" 00:21:54.017 }, 00:21:54.017 "auth": { 00:21:54.017 "state": "completed", 00:21:54.017 "digest": "sha256", 00:21:54.017 "dhgroup": "ffdhe6144" 00:21:54.017 } 00:21:54.017 } 00:21:54.017 ]' 00:21:54.017 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.017 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.017 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.276 11:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret 
DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:54.844 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.103 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.362 00:21:55.362 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.362 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.362 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.621 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.621 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.621 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.621 11:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.621 11:27:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.621 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.621 { 00:21:55.621 "cntlid": 35, 00:21:55.622 "qid": 0, 00:21:55.622 "state": "enabled", 00:21:55.622 "thread": "nvmf_tgt_poll_group_000", 00:21:55.622 "listen_address": { 00:21:55.622 "trtype": "TCP", 00:21:55.622 "adrfam": "IPv4", 00:21:55.622 "traddr": "10.0.0.2", 00:21:55.622 "trsvcid": "4420" 00:21:55.622 }, 00:21:55.622 "peer_address": { 00:21:55.622 "trtype": "TCP", 00:21:55.622 "adrfam": "IPv4", 00:21:55.622 "traddr": "10.0.0.1", 00:21:55.622 "trsvcid": "58848" 00:21:55.622 }, 00:21:55.622 "auth": { 00:21:55.622 "state": "completed", 00:21:55.622 "digest": "sha256", 00:21:55.622 "dhgroup": "ffdhe6144" 00:21:55.622 } 00:21:55.622 } 00:21:55.622 ]' 00:21:55.622 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.622 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.622 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.880 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.880 11:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.880 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.880 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.880 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.880 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:21:56.447 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.705 11:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.963 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.220 00:21:57.221 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.221 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.221 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.479 11:27:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.479 { 00:21:57.479 "cntlid": 37, 00:21:57.479 "qid": 0, 00:21:57.479 "state": "enabled", 00:21:57.479 "thread": "nvmf_tgt_poll_group_000", 00:21:57.479 "listen_address": { 00:21:57.479 "trtype": "TCP", 00:21:57.479 "adrfam": "IPv4", 00:21:57.479 "traddr": "10.0.0.2", 00:21:57.479 "trsvcid": "4420" 00:21:57.479 }, 00:21:57.479 "peer_address": { 00:21:57.479 "trtype": "TCP", 00:21:57.479 "adrfam": "IPv4", 00:21:57.479 "traddr": "10.0.0.1", 00:21:57.479 "trsvcid": "58894" 00:21:57.479 }, 00:21:57.479 "auth": { 00:21:57.479 "state": "completed", 00:21:57.479 "digest": "sha256", 00:21:57.479 "dhgroup": "ffdhe6144" 00:21:57.479 } 00:21:57.479 } 00:21:57.479 ]' 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.479 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.737 11:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:21:58.302 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.302 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.302 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.302 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.303 11:27:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.303 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.869 00:21:58.869 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.869 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.869 11:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.869 11:27:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.869 { 00:21:58.869 "cntlid": 39, 00:21:58.869 "qid": 0, 00:21:58.869 "state": "enabled", 00:21:58.869 "thread": "nvmf_tgt_poll_group_000", 00:21:58.869 "listen_address": { 00:21:58.869 "trtype": "TCP", 00:21:58.869 "adrfam": "IPv4", 00:21:58.869 "traddr": "10.0.0.2", 00:21:58.869 "trsvcid": "4420" 00:21:58.869 }, 00:21:58.869 "peer_address": { 00:21:58.869 "trtype": "TCP", 00:21:58.869 "adrfam": "IPv4", 00:21:58.869 "traddr": "10.0.0.1", 00:21:58.869 "trsvcid": "58916" 00:21:58.869 }, 00:21:58.869 "auth": { 00:21:58.869 "state": "completed", 00:21:58.869 "digest": "sha256", 00:21:58.869 "dhgroup": "ffdhe6144" 00:21:58.869 } 00:21:58.869 } 00:21:58.869 ]' 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.869 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.127 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:21:59.693 11:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:59.693 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.950 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.951 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.951 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.951 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.951 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.951 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.516 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.516 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.773 11:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.773 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.773 { 00:22:00.773 "cntlid": 41, 00:22:00.773 "qid": 0, 00:22:00.773 "state": "enabled", 00:22:00.773 "thread": "nvmf_tgt_poll_group_000", 00:22:00.773 "listen_address": { 00:22:00.773 "trtype": "TCP", 00:22:00.773 "adrfam": "IPv4", 00:22:00.773 "traddr": "10.0.0.2", 00:22:00.773 "trsvcid": "4420" 00:22:00.773 }, 00:22:00.773 "peer_address": { 00:22:00.773 "trtype": "TCP", 00:22:00.773 "adrfam": "IPv4", 00:22:00.773 "traddr": "10.0.0.1", 00:22:00.773 "trsvcid": "58834" 00:22:00.773 }, 00:22:00.773 "auth": { 00:22:00.773 "state": "completed", 00:22:00.773 "digest": "sha256", 00:22:00.773 "dhgroup": "ffdhe8192" 00:22:00.773 } 00:22:00.773 } 00:22:00.774 ]' 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.774 11:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:01.032 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.597 11:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.164 00:22:02.164 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.164 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.164 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.422 { 00:22:02.422 "cntlid": 43, 00:22:02.422 "qid": 0, 00:22:02.422 "state": "enabled", 00:22:02.422 "thread": "nvmf_tgt_poll_group_000", 00:22:02.422 "listen_address": { 00:22:02.422 "trtype": "TCP", 00:22:02.422 "adrfam": "IPv4", 00:22:02.422 "traddr": "10.0.0.2", 00:22:02.422 "trsvcid": "4420" 00:22:02.422 }, 00:22:02.422 "peer_address": { 00:22:02.422 "trtype": "TCP", 00:22:02.422 "adrfam": "IPv4", 00:22:02.422 "traddr": "10.0.0.1", 00:22:02.422 "trsvcid": "58852" 00:22:02.422 }, 00:22:02.422 "auth": { 00:22:02.422 "state": "completed", 00:22:02.422 "digest": "sha256", 00:22:02.422 "dhgroup": "ffdhe8192" 00:22:02.422 } 00:22:02.422 } 00:22:02.422 ]' 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.422 11:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.680 11:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.245 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.503 11:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.069 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.069 { 00:22:04.069 "cntlid": 45, 00:22:04.069 "qid": 0, 00:22:04.069 "state": "enabled", 00:22:04.069 "thread": "nvmf_tgt_poll_group_000", 00:22:04.069 "listen_address": { 00:22:04.069 "trtype": "TCP", 00:22:04.069 "adrfam": "IPv4", 00:22:04.069 "traddr": "10.0.0.2", 00:22:04.069 "trsvcid": "4420" 00:22:04.069 }, 00:22:04.069 "peer_address": { 00:22:04.069 "trtype": "TCP", 00:22:04.069 "adrfam": "IPv4", 00:22:04.069 "traddr": "10.0.0.1", 00:22:04.069 "trsvcid": "58864" 00:22:04.069 }, 00:22:04.069 "auth": { 00:22:04.069 "state": "completed", 00:22:04.069 "digest": "sha256", 00:22:04.069 "dhgroup": "ffdhe8192" 00:22:04.069 } 00:22:04.069 } 00:22:04.069 ]' 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.069 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.327 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.327 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.327 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.327 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:04.327 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.585 11:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:05.150 11:27:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.150 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.714 00:22:05.714 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.714 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.714 11:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.972 { 00:22:05.972 "cntlid": 47, 00:22:05.972 "qid": 0, 00:22:05.972 "state": "enabled", 00:22:05.972 "thread": "nvmf_tgt_poll_group_000", 00:22:05.972 "listen_address": { 00:22:05.972 "trtype": "TCP", 00:22:05.972 "adrfam": "IPv4", 00:22:05.972 "traddr": "10.0.0.2", 00:22:05.972 "trsvcid": "4420" 00:22:05.972 }, 00:22:05.972 "peer_address": { 00:22:05.972 "trtype": "TCP", 00:22:05.972 "adrfam": "IPv4", 00:22:05.972 "traddr": "10.0.0.1", 00:22:05.972 "trsvcid": "58900" 00:22:05.972 }, 00:22:05.972 "auth": { 00:22:05.972 "state": "completed", 00:22:05.972 "digest": "sha256", 00:22:05.972 "dhgroup": "ffdhe8192" 00:22:05.972 } 00:22:05.972 } 00:22:05.972 ]' 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.972 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.230 11:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:06.794 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.051 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.308 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.308 11:27:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.308 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.308 { 00:22:07.308 "cntlid": 49, 00:22:07.308 "qid": 0, 00:22:07.308 "state": "enabled", 00:22:07.308 "thread": "nvmf_tgt_poll_group_000", 00:22:07.308 "listen_address": { 00:22:07.308 "trtype": "TCP", 00:22:07.308 "adrfam": "IPv4", 00:22:07.308 "traddr": "10.0.0.2", 00:22:07.308 "trsvcid": "4420" 00:22:07.308 }, 00:22:07.308 "peer_address": { 00:22:07.308 "trtype": "TCP", 00:22:07.308 "adrfam": "IPv4", 00:22:07.308 "traddr": "10.0.0.1", 00:22:07.308 "trsvcid": "58928" 00:22:07.308 }, 00:22:07.308 "auth": { 00:22:07.308 "state": "completed", 00:22:07.308 "digest": "sha384", 00:22:07.308 "dhgroup": "null" 00:22:07.308 } 00:22:07.308 } 00:22:07.308 ]' 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.565 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.821 11:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.383 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.639 11:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.639 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.639 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.639 00:22:08.639 
11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.639 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.639 11:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.896 { 00:22:08.896 "cntlid": 51, 00:22:08.896 "qid": 0, 00:22:08.896 "state": "enabled", 00:22:08.896 "thread": "nvmf_tgt_poll_group_000", 00:22:08.896 "listen_address": { 00:22:08.896 "trtype": "TCP", 00:22:08.896 "adrfam": "IPv4", 00:22:08.896 "traddr": "10.0.0.2", 00:22:08.896 "trsvcid": "4420" 00:22:08.896 }, 00:22:08.896 "peer_address": { 00:22:08.896 "trtype": "TCP", 00:22:08.896 "adrfam": "IPv4", 00:22:08.896 "traddr": "10.0.0.1", 00:22:08.896 "trsvcid": "58970" 00:22:08.896 }, 00:22:08.896 "auth": { 00:22:08.896 "state": "completed", 00:22:08.896 "digest": "sha384", 00:22:08.896 "dhgroup": "null" 00:22:08.896 } 00:22:08.896 } 00:22:08.896 ]' 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.896 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.154 11:27:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:09.154 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.154 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.154 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.154 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.154 11:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.718 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:09.718 11:27:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.975 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:10.232 00:22:10.232 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.232 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.232 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.489 { 00:22:10.489 "cntlid": 53, 00:22:10.489 "qid": 0, 00:22:10.489 "state": "enabled", 00:22:10.489 "thread": "nvmf_tgt_poll_group_000", 00:22:10.489 "listen_address": { 00:22:10.489 "trtype": "TCP", 00:22:10.489 "adrfam": "IPv4", 00:22:10.489 "traddr": "10.0.0.2", 00:22:10.489 "trsvcid": "4420" 00:22:10.489 }, 00:22:10.489 "peer_address": { 00:22:10.489 "trtype": "TCP", 00:22:10.489 "adrfam": "IPv4", 00:22:10.489 "traddr": "10.0.0.1", 00:22:10.489 "trsvcid": "48714" 00:22:10.489 }, 00:22:10.489 "auth": { 00:22:10.489 "state": "completed", 00:22:10.489 "digest": "sha384", 00:22:10.489 "dhgroup": "null" 00:22:10.489 } 00:22:10.489 } 00:22:10.489 ]' 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.489 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.746 11:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:22:11.318 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.578 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:22:11.834 00:22:11.834 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.834 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.834 11:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.834 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.834 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.834 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.835 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.835 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.835 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.835 { 00:22:11.835 "cntlid": 55, 00:22:11.835 "qid": 0, 00:22:11.835 "state": "enabled", 00:22:11.835 "thread": "nvmf_tgt_poll_group_000", 00:22:11.835 "listen_address": { 00:22:11.835 "trtype": "TCP", 00:22:11.835 "adrfam": "IPv4", 00:22:11.835 "traddr": "10.0.0.2", 00:22:11.835 "trsvcid": "4420" 00:22:11.835 }, 00:22:11.835 "peer_address": { 00:22:11.835 "trtype": "TCP", 00:22:11.835 "adrfam": "IPv4", 00:22:11.835 "traddr": "10.0.0.1", 00:22:11.835 "trsvcid": "48740" 00:22:11.835 }, 00:22:11.835 "auth": { 00:22:11.835 "state": "completed", 00:22:11.835 "digest": "sha384", 00:22:11.835 "dhgroup": "null" 00:22:11.835 } 00:22:11.835 } 00:22:11.835 ]' 00:22:11.835 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.092 
11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.092 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.349 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.913 11:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.913 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.914 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.914 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.171 00:22:13.171 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.171 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.171 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.429 { 00:22:13.429 "cntlid": 57, 00:22:13.429 "qid": 0, 00:22:13.429 "state": "enabled", 00:22:13.429 "thread": "nvmf_tgt_poll_group_000", 00:22:13.429 "listen_address": { 00:22:13.429 "trtype": "TCP", 00:22:13.429 "adrfam": "IPv4", 00:22:13.429 "traddr": "10.0.0.2", 00:22:13.429 "trsvcid": "4420" 00:22:13.429 }, 00:22:13.429 "peer_address": { 00:22:13.429 "trtype": "TCP", 00:22:13.429 "adrfam": "IPv4", 00:22:13.429 "traddr": "10.0.0.1", 00:22:13.429 "trsvcid": "48762" 00:22:13.429 }, 00:22:13.429 "auth": { 00:22:13.429 "state": "completed", 00:22:13.429 "digest": "sha384", 00:22:13.429 "dhgroup": "ffdhe2048" 00:22:13.429 } 00:22:13.429 } 00:22:13.429 ]' 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.429 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.687 11:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:14.253 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.512 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.770 00:22:14.770 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.770 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.770 11:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.770 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.770 { 00:22:14.770 "cntlid": 59, 00:22:14.770 "qid": 0, 00:22:14.770 "state": "enabled", 00:22:14.770 "thread": "nvmf_tgt_poll_group_000", 00:22:14.770 "listen_address": { 00:22:14.770 "trtype": "TCP", 00:22:14.770 "adrfam": "IPv4", 00:22:14.770 "traddr": "10.0.0.2", 00:22:14.770 "trsvcid": "4420" 00:22:14.770 }, 00:22:14.770 "peer_address": { 00:22:14.770 "trtype": "TCP", 00:22:14.770 "adrfam": "IPv4", 00:22:14.770 "traddr": "10.0.0.1", 00:22:14.770 "trsvcid": "48774" 00:22:14.770 }, 00:22:14.770 "auth": { 00:22:14.770 "state": "completed", 00:22:14.770 "digest": "sha384", 00:22:14.770 "dhgroup": "ffdhe2048" 00:22:14.770 } 00:22:14.770 } 00:22:14.770 ]' 00:22:14.770 
11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.028 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.286 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.851 11:28:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:15.851 11:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:15.851 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.110 00:22:16.110 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.110 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.110 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.367 { 00:22:16.367 "cntlid": 61, 00:22:16.367 "qid": 0, 00:22:16.367 "state": "enabled", 00:22:16.367 "thread": "nvmf_tgt_poll_group_000", 00:22:16.367 "listen_address": { 00:22:16.367 "trtype": "TCP", 00:22:16.367 "adrfam": "IPv4", 00:22:16.367 "traddr": "10.0.0.2", 00:22:16.367 "trsvcid": "4420" 00:22:16.367 }, 00:22:16.367 "peer_address": { 00:22:16.367 "trtype": "TCP", 00:22:16.367 "adrfam": "IPv4", 00:22:16.367 "traddr": "10.0.0.1", 00:22:16.367 "trsvcid": "48802" 00:22:16.367 }, 00:22:16.367 "auth": { 00:22:16.367 "state": "completed", 00:22:16.367 "digest": 
"sha384", 00:22:16.367 "dhgroup": "ffdhe2048" 00:22:16.367 } 00:22:16.367 } 00:22:16.367 ]' 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.367 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.624 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.624 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.624 11:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.223 11:28:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.223 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.480 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.738 00:22:17.738 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.738 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.738 11:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.025 { 00:22:18.025 "cntlid": 63, 00:22:18.025 "qid": 0, 00:22:18.025 "state": "enabled", 00:22:18.025 "thread": "nvmf_tgt_poll_group_000", 00:22:18.025 "listen_address": { 00:22:18.025 "trtype": "TCP", 00:22:18.025 "adrfam": "IPv4", 00:22:18.025 "traddr": "10.0.0.2", 00:22:18.025 "trsvcid": "4420" 00:22:18.025 }, 00:22:18.025 "peer_address": { 00:22:18.025 "trtype": "TCP", 00:22:18.025 "adrfam": "IPv4", 00:22:18.025 "traddr": "10.0.0.1", 00:22:18.025 "trsvcid": "48834" 00:22:18.025 }, 00:22:18.025 "auth": 
{ 00:22:18.025 "state": "completed", 00:22:18.025 "digest": "sha384", 00:22:18.025 "dhgroup": "ffdhe2048" 00:22:18.025 } 00:22:18.025 } 00:22:18.025 ]' 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.025 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.282 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:18.848 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.848 11:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:18.848 11:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.848 11:28:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.848 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.105 00:22:19.105 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.105 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.105 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.363 { 00:22:19.363 "cntlid": 65, 00:22:19.363 "qid": 0, 00:22:19.363 "state": "enabled", 00:22:19.363 "thread": "nvmf_tgt_poll_group_000", 00:22:19.363 "listen_address": { 00:22:19.363 "trtype": "TCP", 00:22:19.363 "adrfam": "IPv4", 00:22:19.363 "traddr": "10.0.0.2", 00:22:19.363 "trsvcid": "4420" 00:22:19.363 }, 00:22:19.363 "peer_address": { 00:22:19.363 "trtype": "TCP", 
00:22:19.363 "adrfam": "IPv4", 00:22:19.363 "traddr": "10.0.0.1", 00:22:19.363 "trsvcid": "48854" 00:22:19.363 }, 00:22:19.363 "auth": { 00:22:19.363 "state": "completed", 00:22:19.363 "digest": "sha384", 00:22:19.363 "dhgroup": "ffdhe3072" 00:22:19.363 } 00:22:19.363 } 00:22:19.363 ]' 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:19.363 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.621 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.621 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.621 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.622 11:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:20.187 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.446 11:28:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.446 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.704 00:22:20.704 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.704 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.704 11:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.962 { 00:22:20.962 "cntlid": 67, 00:22:20.962 "qid": 0, 00:22:20.962 "state": "enabled", 00:22:20.962 "thread": "nvmf_tgt_poll_group_000", 00:22:20.962 "listen_address": { 00:22:20.962 "trtype": "TCP", 00:22:20.962 "adrfam": 
"IPv4", 00:22:20.962 "traddr": "10.0.0.2", 00:22:20.962 "trsvcid": "4420" 00:22:20.962 }, 00:22:20.962 "peer_address": { 00:22:20.962 "trtype": "TCP", 00:22:20.962 "adrfam": "IPv4", 00:22:20.962 "traddr": "10.0.0.1", 00:22:20.962 "trsvcid": "56432" 00:22:20.962 }, 00:22:20.962 "auth": { 00:22:20.962 "state": "completed", 00:22:20.962 "digest": "sha384", 00:22:20.962 "dhgroup": "ffdhe3072" 00:22:20.962 } 00:22:20.962 } 00:22:20.962 ]' 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.962 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.220 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:21.786 11:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.044 11:28:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.044 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.301 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.301 { 00:22:22.301 "cntlid": 69, 00:22:22.301 "qid": 0, 00:22:22.301 "state": "enabled", 00:22:22.301 "thread": 
"nvmf_tgt_poll_group_000", 00:22:22.301 "listen_address": { 00:22:22.301 "trtype": "TCP", 00:22:22.301 "adrfam": "IPv4", 00:22:22.301 "traddr": "10.0.0.2", 00:22:22.301 "trsvcid": "4420" 00:22:22.301 }, 00:22:22.301 "peer_address": { 00:22:22.301 "trtype": "TCP", 00:22:22.301 "adrfam": "IPv4", 00:22:22.301 "traddr": "10.0.0.1", 00:22:22.301 "trsvcid": "56478" 00:22:22.301 }, 00:22:22.301 "auth": { 00:22:22.301 "state": "completed", 00:22:22.301 "digest": "sha384", 00:22:22.301 "dhgroup": "ffdhe3072" 00:22:22.301 } 00:22:22.301 } 00:22:22.301 ]' 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.301 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.559 11:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:23.124 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.381 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.638 00:22:23.638 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.638 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.638 11:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.896 { 00:22:23.896 "cntlid": 71, 00:22:23.896 "qid": 0, 00:22:23.896 "state": "enabled", 00:22:23.896 "thread": 
"nvmf_tgt_poll_group_000", 00:22:23.896 "listen_address": { 00:22:23.896 "trtype": "TCP", 00:22:23.896 "adrfam": "IPv4", 00:22:23.896 "traddr": "10.0.0.2", 00:22:23.896 "trsvcid": "4420" 00:22:23.896 }, 00:22:23.896 "peer_address": { 00:22:23.896 "trtype": "TCP", 00:22:23.896 "adrfam": "IPv4", 00:22:23.896 "traddr": "10.0.0.1", 00:22:23.896 "trsvcid": "56510" 00:22:23.896 }, 00:22:23.896 "auth": { 00:22:23.896 "state": "completed", 00:22:23.896 "digest": "sha384", 00:22:23.896 "dhgroup": "ffdhe3072" 00:22:23.896 } 00:22:23.896 } 00:22:23.896 ]' 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.896 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.154 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.720 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.720 11:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.978 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.236 00:22:25.236 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.236 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.236 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:25.506 { 00:22:25.506 "cntlid": 73, 00:22:25.506 "qid": 0, 00:22:25.506 "state": "enabled", 00:22:25.506 "thread": "nvmf_tgt_poll_group_000", 00:22:25.506 "listen_address": { 00:22:25.506 "trtype": "TCP", 00:22:25.506 "adrfam": "IPv4", 00:22:25.506 "traddr": "10.0.0.2", 00:22:25.506 "trsvcid": "4420" 00:22:25.506 }, 00:22:25.506 "peer_address": { 00:22:25.506 "trtype": "TCP", 00:22:25.506 "adrfam": "IPv4", 00:22:25.506 "traddr": "10.0.0.1", 00:22:25.506 "trsvcid": "56544" 00:22:25.506 }, 00:22:25.506 "auth": { 00:22:25.506 "state": "completed", 00:22:25.506 "digest": "sha384", 00:22:25.506 "dhgroup": "ffdhe4096" 00:22:25.506 } 00:22:25.506 } 00:22:25.506 ]' 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.506 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.764 11:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret 
DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.329 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.587 11:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.845 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 11:28:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.845 { 00:22:26.845 "cntlid": 75, 00:22:26.845 "qid": 0, 00:22:26.845 "state": "enabled", 00:22:26.845 "thread": "nvmf_tgt_poll_group_000", 00:22:26.845 "listen_address": { 00:22:26.845 "trtype": "TCP", 00:22:26.845 "adrfam": "IPv4", 00:22:26.845 "traddr": "10.0.0.2", 00:22:26.845 "trsvcid": "4420" 00:22:26.845 }, 00:22:26.845 "peer_address": { 00:22:26.845 "trtype": "TCP", 00:22:26.845 "adrfam": "IPv4", 00:22:26.845 "traddr": "10.0.0.1", 00:22:26.845 "trsvcid": "56580" 00:22:26.845 }, 00:22:26.845 "auth": { 00:22:26.845 "state": "completed", 00:22:26.845 "digest": "sha384", 00:22:26.845 "dhgroup": "ffdhe4096" 00:22:26.845 } 00:22:26.845 } 00:22:26.845 ]' 00:22:26.845 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.103 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.360 11:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.927 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.186 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.186 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.186 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.186 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.445 11:28:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.445 { 00:22:28.445 "cntlid": 77, 00:22:28.445 "qid": 0, 00:22:28.445 "state": "enabled", 00:22:28.445 "thread": "nvmf_tgt_poll_group_000", 00:22:28.445 "listen_address": { 00:22:28.445 "trtype": "TCP", 00:22:28.445 "adrfam": "IPv4", 00:22:28.445 "traddr": "10.0.0.2", 00:22:28.445 "trsvcid": "4420" 00:22:28.445 }, 00:22:28.445 "peer_address": { 00:22:28.445 "trtype": "TCP", 00:22:28.445 "adrfam": "IPv4", 00:22:28.445 "traddr": "10.0.0.1", 00:22:28.445 "trsvcid": "56622" 00:22:28.445 }, 00:22:28.445 "auth": { 00:22:28.445 "state": "completed", 00:22:28.445 "digest": "sha384", 00:22:28.445 "dhgroup": "ffdhe4096" 00:22:28.445 } 00:22:28.445 } 00:22:28.445 ]' 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.445 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.446 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.705 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:28.705 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.705 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.705 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.705 11:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.705 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.272 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:29.531 11:28:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.531 11:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.790 00:22:29.790 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.790 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.790 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.048 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.048 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.048 11:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.048 11:28:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.048 11:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.048 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.048 { 00:22:30.048 "cntlid": 79, 00:22:30.048 "qid": 0, 00:22:30.048 "state": "enabled", 00:22:30.048 "thread": "nvmf_tgt_poll_group_000", 00:22:30.048 "listen_address": { 00:22:30.048 "trtype": "TCP", 00:22:30.048 "adrfam": "IPv4", 00:22:30.048 "traddr": "10.0.0.2", 00:22:30.049 "trsvcid": "4420" 00:22:30.049 }, 00:22:30.049 "peer_address": { 00:22:30.049 "trtype": "TCP", 00:22:30.049 "adrfam": "IPv4", 00:22:30.049 "traddr": "10.0.0.1", 00:22:30.049 "trsvcid": "56644" 00:22:30.049 }, 00:22:30.049 "auth": { 00:22:30.049 "state": "completed", 00:22:30.049 "digest": "sha384", 00:22:30.049 "dhgroup": "ffdhe4096" 00:22:30.049 } 00:22:30.049 } 00:22:30.049 ]' 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.049 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.307 11:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:30.874 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.134 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.393 00:22:31.393 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.393 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.393 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.652 { 00:22:31.652 "cntlid": 81, 00:22:31.652 "qid": 0, 00:22:31.652 "state": "enabled", 00:22:31.652 "thread": "nvmf_tgt_poll_group_000", 00:22:31.652 "listen_address": { 00:22:31.652 "trtype": "TCP", 00:22:31.652 "adrfam": "IPv4", 00:22:31.652 "traddr": "10.0.0.2", 00:22:31.652 "trsvcid": "4420" 00:22:31.652 }, 00:22:31.652 "peer_address": { 00:22:31.652 "trtype": "TCP", 00:22:31.652 "adrfam": "IPv4", 00:22:31.652 "traddr": "10.0.0.1", 00:22:31.652 "trsvcid": "59752" 00:22:31.652 }, 00:22:31.652 "auth": { 00:22:31.652 "state": "completed", 00:22:31.652 "digest": "sha384", 00:22:31.652 "dhgroup": "ffdhe6144" 00:22:31.652 } 00:22:31.652 } 00:22:31.652 ]' 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.652 11:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:31.910 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:32.477 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.736 11:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.995 00:22:32.995 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.995 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.995 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.254 { 00:22:33.254 "cntlid": 83, 00:22:33.254 "qid": 0, 00:22:33.254 "state": "enabled", 00:22:33.254 "thread": "nvmf_tgt_poll_group_000", 00:22:33.254 "listen_address": { 00:22:33.254 "trtype": "TCP", 00:22:33.254 "adrfam": "IPv4", 00:22:33.254 "traddr": "10.0.0.2", 00:22:33.254 "trsvcid": "4420" 00:22:33.254 }, 00:22:33.254 "peer_address": { 00:22:33.254 "trtype": "TCP", 00:22:33.254 "adrfam": "IPv4", 00:22:33.254 "traddr": "10.0.0.1", 00:22:33.254 "trsvcid": "59776" 00:22:33.254 }, 00:22:33.254 "auth": { 00:22:33.254 "state": "completed", 00:22:33.254 "digest": "sha384", 00:22:33.254 "dhgroup": "ffdhe6144" 00:22:33.254 } 00:22:33.254 } 00:22:33.254 ]' 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.254 11:28:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.535 11:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:34.101 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.359 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.617 00:22:34.617 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.617 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.617 11:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.875 { 00:22:34.875 "cntlid": 85, 00:22:34.875 "qid": 0, 00:22:34.875 "state": "enabled", 00:22:34.875 "thread": "nvmf_tgt_poll_group_000", 00:22:34.875 "listen_address": { 00:22:34.875 "trtype": "TCP", 00:22:34.875 "adrfam": "IPv4", 00:22:34.875 "traddr": "10.0.0.2", 00:22:34.875 "trsvcid": "4420" 00:22:34.875 }, 00:22:34.875 "peer_address": { 00:22:34.875 "trtype": "TCP", 00:22:34.875 "adrfam": "IPv4", 00:22:34.875 "traddr": "10.0.0.1", 00:22:34.875 "trsvcid": "59810" 00:22:34.875 }, 00:22:34.875 "auth": { 00:22:34.875 "state": "completed", 00:22:34.875 "digest": "sha384", 00:22:34.875 "dhgroup": "ffdhe6144" 00:22:34.875 } 00:22:34.875 } 00:22:34.875 ]' 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.875 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:34.876 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.134 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.701 11:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:35.959 11:28:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.959 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.218 00:22:36.218 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.218 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.218 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.477 { 00:22:36.477 "cntlid": 87, 00:22:36.477 "qid": 0, 00:22:36.477 "state": "enabled", 00:22:36.477 "thread": "nvmf_tgt_poll_group_000", 00:22:36.477 "listen_address": { 00:22:36.477 "trtype": "TCP", 00:22:36.477 "adrfam": "IPv4", 00:22:36.477 "traddr": "10.0.0.2", 00:22:36.477 "trsvcid": "4420" 00:22:36.477 }, 00:22:36.477 "peer_address": { 00:22:36.477 "trtype": "TCP", 00:22:36.477 "adrfam": "IPv4", 00:22:36.477 "traddr": "10.0.0.1", 00:22:36.477 "trsvcid": "59834" 00:22:36.477 }, 00:22:36.477 "auth": { 00:22:36.477 "state": "completed", 00:22:36.477 "digest": "sha384", 00:22:36.477 "dhgroup": "ffdhe6144" 00:22:36.477 } 00:22:36.477 } 00:22:36.477 ]' 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.477 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.734 11:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:37.298 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.299 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.557 11:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.123 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.123 11:28:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.123 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.123 { 00:22:38.123 "cntlid": 89, 00:22:38.123 "qid": 0, 00:22:38.123 "state": "enabled", 00:22:38.123 "thread": "nvmf_tgt_poll_group_000", 00:22:38.123 "listen_address": { 00:22:38.123 "trtype": "TCP", 00:22:38.123 "adrfam": "IPv4", 00:22:38.123 "traddr": "10.0.0.2", 00:22:38.123 "trsvcid": "4420" 00:22:38.123 }, 00:22:38.124 "peer_address": { 00:22:38.124 "trtype": "TCP", 00:22:38.124 "adrfam": "IPv4", 00:22:38.124 "traddr": "10.0.0.1", 00:22:38.124 "trsvcid": "59858" 00:22:38.124 }, 00:22:38.124 "auth": { 00:22:38.124 "state": "completed", 00:22:38.124 "digest": "sha384", 00:22:38.124 "dhgroup": "ffdhe8192" 00:22:38.124 } 00:22:38.124 } 00:22:38.124 ]' 00:22:38.124 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.124 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.124 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.124 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.381 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.381 11:28:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.381 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.381 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.381 11:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:38.948 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.206 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.773 00:22:39.773 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:22:39.773 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.773 11:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.031 { 00:22:40.031 "cntlid": 91, 00:22:40.031 "qid": 0, 00:22:40.031 "state": "enabled", 00:22:40.031 "thread": "nvmf_tgt_poll_group_000", 00:22:40.031 "listen_address": { 00:22:40.031 "trtype": "TCP", 00:22:40.031 "adrfam": "IPv4", 00:22:40.031 "traddr": "10.0.0.2", 00:22:40.031 "trsvcid": "4420" 00:22:40.031 }, 00:22:40.031 "peer_address": { 00:22:40.031 "trtype": "TCP", 00:22:40.031 "adrfam": "IPv4", 00:22:40.031 "traddr": "10.0.0.1", 00:22:40.031 "trsvcid": "59880" 00:22:40.031 }, 00:22:40.031 "auth": { 00:22:40.031 "state": "completed", 00:22:40.031 "digest": "sha384", 00:22:40.031 "dhgroup": "ffdhe8192" 00:22:40.031 } 00:22:40.031 } 00:22:40.031 ]' 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.031 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.289 11:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:40.884 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.154 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.412 
00:22:41.412 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.413 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.413 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.671 { 00:22:41.671 "cntlid": 93, 00:22:41.671 "qid": 0, 00:22:41.671 "state": "enabled", 00:22:41.671 "thread": "nvmf_tgt_poll_group_000", 00:22:41.671 "listen_address": { 00:22:41.671 "trtype": "TCP", 00:22:41.671 "adrfam": "IPv4", 00:22:41.671 "traddr": "10.0.0.2", 00:22:41.671 "trsvcid": "4420" 00:22:41.671 }, 00:22:41.671 "peer_address": { 00:22:41.671 "trtype": "TCP", 00:22:41.671 "adrfam": "IPv4", 00:22:41.671 "traddr": "10.0.0.1", 00:22:41.671 "trsvcid": "56632" 00:22:41.671 }, 00:22:41.671 "auth": { 00:22:41.671 "state": "completed", 00:22:41.671 "digest": "sha384", 00:22:41.671 "dhgroup": "ffdhe8192" 00:22:41.671 } 00:22:41.671 } 00:22:41.671 ]' 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:41.671 11:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.671 11:28:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:41.671 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.930 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.930 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.930 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.930 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:22:42.498 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.757 11:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:43.324 
00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.324 { 00:22:43.324 "cntlid": 95, 00:22:43.324 "qid": 0, 00:22:43.324 "state": "enabled", 00:22:43.324 "thread": "nvmf_tgt_poll_group_000", 00:22:43.324 "listen_address": { 00:22:43.324 "trtype": "TCP", 00:22:43.324 "adrfam": "IPv4", 00:22:43.324 "traddr": "10.0.0.2", 00:22:43.324 "trsvcid": "4420" 00:22:43.324 }, 00:22:43.324 "peer_address": { 00:22:43.324 "trtype": "TCP", 00:22:43.324 "adrfam": "IPv4", 00:22:43.324 "traddr": "10.0.0.1", 00:22:43.324 "trsvcid": "56678" 00:22:43.324 }, 00:22:43.324 "auth": { 00:22:43.324 "state": "completed", 00:22:43.324 "digest": "sha384", 00:22:43.324 "dhgroup": "ffdhe8192" 00:22:43.324 } 00:22:43.324 } 00:22:43.324 ]' 00:22:43.324 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.582 11:28:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.582 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.840 11:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:44.407 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.408 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.667 00:22:44.667 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:44.667 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.667 11:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:44.926 { 00:22:44.926 "cntlid": 97, 00:22:44.926 "qid": 0, 00:22:44.926 "state": "enabled", 00:22:44.926 "thread": "nvmf_tgt_poll_group_000", 00:22:44.926 "listen_address": { 00:22:44.926 "trtype": "TCP", 00:22:44.926 "adrfam": "IPv4", 00:22:44.926 "traddr": "10.0.0.2", 00:22:44.926 "trsvcid": "4420" 00:22:44.926 }, 00:22:44.926 "peer_address": { 00:22:44.926 "trtype": "TCP", 00:22:44.926 "adrfam": "IPv4", 00:22:44.926 "traddr": "10.0.0.1", 00:22:44.926 "trsvcid": "56702" 00:22:44.926 }, 00:22:44.926 "auth": { 00:22:44.926 "state": "completed", 00:22:44.926 "digest": "sha512", 00:22:44.926 "dhgroup": "null" 00:22:44.926 } 00:22:44.926 } 00:22:44.926 ]' 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.926 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.185 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:45.753 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.753 11:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.753 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.753 11:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.753 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:45.753 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.753 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:45.753 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.013 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.272 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.272 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.272 { 00:22:46.272 "cntlid": 99, 00:22:46.272 "qid": 0, 00:22:46.272 "state": "enabled", 00:22:46.272 "thread": "nvmf_tgt_poll_group_000", 00:22:46.272 "listen_address": { 00:22:46.272 "trtype": "TCP", 00:22:46.272 "adrfam": "IPv4", 00:22:46.272 "traddr": "10.0.0.2", 00:22:46.272 "trsvcid": "4420" 00:22:46.272 }, 00:22:46.272 "peer_address": { 00:22:46.272 "trtype": "TCP", 00:22:46.272 "adrfam": "IPv4", 00:22:46.272 "traddr": "10.0.0.1", 00:22:46.272 "trsvcid": "56718" 00:22:46.272 }, 00:22:46.272 "auth": { 00:22:46.272 "state": "completed", 00:22:46.272 "digest": "sha512", 00:22:46.272 "dhgroup": "null" 00:22:46.272 } 00:22:46.272 } 00:22:46.272 ]' 00:22:46.531 
11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.531 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.531 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.531 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:46.531 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.531 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.532 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.532 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.790 11:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.358 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.358 11:28:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.616 00:22:47.616 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.616 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.616 11:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.874 { 00:22:47.874 "cntlid": 101, 00:22:47.874 "qid": 0, 00:22:47.874 "state": "enabled", 00:22:47.874 "thread": "nvmf_tgt_poll_group_000", 00:22:47.874 "listen_address": { 00:22:47.874 "trtype": "TCP", 00:22:47.874 "adrfam": "IPv4", 00:22:47.874 "traddr": "10.0.0.2", 00:22:47.874 "trsvcid": "4420" 00:22:47.874 }, 00:22:47.874 "peer_address": { 00:22:47.874 "trtype": "TCP", 00:22:47.874 "adrfam": "IPv4", 00:22:47.874 "traddr": "10.0.0.1", 00:22:47.874 "trsvcid": "56742" 00:22:47.874 }, 00:22:47.874 "auth": { 00:22:47.874 "state": "completed", 00:22:47.874 "digest": "sha512", 00:22:47.874 "dhgroup": "null" 
00:22:47.874 } 00:22:47.874 } 00:22:47.874 ]' 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.874 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.132 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:48.696 11:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:48.953 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:48.953 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.953 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:48.953 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:48.953 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.954 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.954 11:28:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.211 00:22:49.211 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.211 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.211 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.469 { 00:22:49.469 "cntlid": 103, 00:22:49.469 "qid": 0, 00:22:49.469 "state": "enabled", 00:22:49.469 "thread": "nvmf_tgt_poll_group_000", 00:22:49.469 "listen_address": { 00:22:49.469 "trtype": "TCP", 00:22:49.469 "adrfam": "IPv4", 00:22:49.469 "traddr": "10.0.0.2", 00:22:49.469 "trsvcid": "4420" 00:22:49.469 }, 00:22:49.469 "peer_address": { 00:22:49.469 "trtype": "TCP", 00:22:49.469 "adrfam": "IPv4", 00:22:49.469 "traddr": "10.0.0.1", 00:22:49.469 "trsvcid": "56760" 00:22:49.469 }, 00:22:49.469 "auth": { 00:22:49.469 "state": "completed", 00:22:49.469 "digest": "sha512", 00:22:49.469 "dhgroup": "null" 00:22:49.469 } 00:22:49.469 } 
00:22:49.469 ]' 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.469 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.728 11:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.294 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.553 00:22:50.553 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.553 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.553 11:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.811 { 00:22:50.811 "cntlid": 105, 00:22:50.811 "qid": 0, 00:22:50.811 "state": "enabled", 00:22:50.811 "thread": "nvmf_tgt_poll_group_000", 00:22:50.811 "listen_address": { 00:22:50.811 "trtype": "TCP", 00:22:50.811 "adrfam": "IPv4", 00:22:50.811 "traddr": "10.0.0.2", 00:22:50.811 "trsvcid": "4420" 00:22:50.811 }, 00:22:50.811 "peer_address": { 00:22:50.811 "trtype": "TCP", 00:22:50.811 "adrfam": "IPv4", 00:22:50.811 "traddr": "10.0.0.1", 00:22:50.811 "trsvcid": "39956" 00:22:50.811 }, 00:22:50.811 "auth": { 00:22:50.811 
"state": "completed", 00:22:50.811 "digest": "sha512", 00:22:50.811 "dhgroup": "ffdhe2048" 00:22:50.811 } 00:22:50.811 } 00:22:50.811 ]' 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:50.811 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.071 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.071 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.071 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.071 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:51.638 11:28:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:51.638 11:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.897 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.156 00:22:52.156 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.156 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.156 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.416 { 00:22:52.416 "cntlid": 107, 00:22:52.416 "qid": 0, 00:22:52.416 "state": "enabled", 00:22:52.416 "thread": "nvmf_tgt_poll_group_000", 00:22:52.416 "listen_address": { 00:22:52.416 "trtype": "TCP", 00:22:52.416 "adrfam": "IPv4", 00:22:52.416 "traddr": "10.0.0.2", 00:22:52.416 "trsvcid": "4420" 00:22:52.416 }, 00:22:52.416 "peer_address": { 00:22:52.416 "trtype": "TCP", 
00:22:52.416 "adrfam": "IPv4", 00:22:52.416 "traddr": "10.0.0.1", 00:22:52.416 "trsvcid": "39974" 00:22:52.416 }, 00:22:52.416 "auth": { 00:22:52.416 "state": "completed", 00:22:52.416 "digest": "sha512", 00:22:52.416 "dhgroup": "ffdhe2048" 00:22:52.416 } 00:22:52.416 } 00:22:52.416 ]' 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.416 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.674 11:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.242 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.501 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.501 11:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.760 { 00:22:53.760 "cntlid": 109, 00:22:53.760 "qid": 0, 00:22:53.760 "state": "enabled", 00:22:53.760 "thread": "nvmf_tgt_poll_group_000", 00:22:53.760 "listen_address": { 00:22:53.760 "trtype": "TCP", 00:22:53.760 "adrfam": "IPv4", 00:22:53.760 "traddr": "10.0.0.2", 00:22:53.760 "trsvcid": "4420" 
00:22:53.760 }, 00:22:53.760 "peer_address": { 00:22:53.760 "trtype": "TCP", 00:22:53.760 "adrfam": "IPv4", 00:22:53.760 "traddr": "10.0.0.1", 00:22:53.760 "trsvcid": "40010" 00:22:53.760 }, 00:22:53.760 "auth": { 00:22:53.760 "state": "completed", 00:22:53.760 "digest": "sha512", 00:22:53.760 "dhgroup": "ffdhe2048" 00:22:53.760 } 00:22:53.760 } 00:22:53.760 ]' 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:53.760 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.019 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.019 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.019 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.019 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:54.586 11:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.845 11:28:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.845 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.103 00:22:55.103 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.103 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.103 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.362 { 00:22:55.362 "cntlid": 111, 00:22:55.362 "qid": 0, 00:22:55.362 "state": "enabled", 00:22:55.362 "thread": "nvmf_tgt_poll_group_000", 00:22:55.362 "listen_address": { 00:22:55.362 "trtype": "TCP", 00:22:55.362 "adrfam": "IPv4", 00:22:55.362 "traddr": "10.0.0.2", 
00:22:55.362 "trsvcid": "4420" 00:22:55.362 }, 00:22:55.362 "peer_address": { 00:22:55.362 "trtype": "TCP", 00:22:55.362 "adrfam": "IPv4", 00:22:55.362 "traddr": "10.0.0.1", 00:22:55.362 "trsvcid": "40034" 00:22:55.362 }, 00:22:55.362 "auth": { 00:22:55.362 "state": "completed", 00:22:55.362 "digest": "sha512", 00:22:55.362 "dhgroup": "ffdhe2048" 00:22:55.362 } 00:22:55.362 } 00:22:55.362 ]' 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.362 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.621 11:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:56.189 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.447 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.448 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.448 11:28:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.448 11:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.448 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.448 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.706 00:22:56.706 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.706 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.706 11:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.706 { 00:22:56.706 "cntlid": 113, 00:22:56.706 "qid": 0, 00:22:56.706 "state": "enabled", 00:22:56.706 "thread": 
"nvmf_tgt_poll_group_000", 00:22:56.706 "listen_address": { 00:22:56.706 "trtype": "TCP", 00:22:56.706 "adrfam": "IPv4", 00:22:56.706 "traddr": "10.0.0.2", 00:22:56.706 "trsvcid": "4420" 00:22:56.706 }, 00:22:56.706 "peer_address": { 00:22:56.706 "trtype": "TCP", 00:22:56.706 "adrfam": "IPv4", 00:22:56.706 "traddr": "10.0.0.1", 00:22:56.706 "trsvcid": "40062" 00:22:56.706 }, 00:22:56.706 "auth": { 00:22:56.706 "state": "completed", 00:22:56.706 "digest": "sha512", 00:22:56.706 "dhgroup": "ffdhe3072" 00:22:56.706 } 00:22:56.706 } 00:22:56.706 ]' 00:22:56.706 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.965 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.224 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:22:57.792 11:28:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:57.792 11:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.792 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.050 00:22:58.050 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.050 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.050 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.309 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:58.309 { 00:22:58.309 "cntlid": 115, 00:22:58.309 "qid": 0, 00:22:58.309 "state": "enabled", 00:22:58.310 "thread": "nvmf_tgt_poll_group_000", 00:22:58.310 "listen_address": { 00:22:58.310 "trtype": "TCP", 00:22:58.310 "adrfam": "IPv4", 00:22:58.310 "traddr": "10.0.0.2", 00:22:58.310 "trsvcid": "4420" 00:22:58.310 }, 00:22:58.310 "peer_address": { 00:22:58.310 "trtype": "TCP", 00:22:58.310 "adrfam": "IPv4", 00:22:58.310 "traddr": "10.0.0.1", 00:22:58.310 "trsvcid": "40092" 00:22:58.310 }, 00:22:58.310 "auth": { 00:22:58.310 "state": "completed", 00:22:58.310 "digest": "sha512", 00:22:58.310 "dhgroup": "ffdhe3072" 00:22:58.310 } 00:22:58.310 } 00:22:58.310 ]' 00:22:58.310 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.310 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.310 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.310 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:58.310 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.567 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.567 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.567 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.567 11:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret 
DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:59.136 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.398 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.656 00:22:59.656 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.656 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.656 11:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.915 { 00:22:59.915 "cntlid": 117, 00:22:59.915 "qid": 0, 00:22:59.915 "state": "enabled", 00:22:59.915 "thread": "nvmf_tgt_poll_group_000", 00:22:59.915 "listen_address": { 00:22:59.915 "trtype": "TCP", 00:22:59.915 "adrfam": "IPv4", 00:22:59.915 "traddr": "10.0.0.2", 00:22:59.915 "trsvcid": "4420" 00:22:59.915 }, 00:22:59.915 "peer_address": { 00:22:59.915 "trtype": "TCP", 00:22:59.915 "adrfam": "IPv4", 00:22:59.915 "traddr": "10.0.0.1", 00:22:59.915 "trsvcid": "40122" 00:22:59.915 }, 00:22:59.915 "auth": { 00:22:59.915 "state": "completed", 00:22:59.915 "digest": "sha512", 00:22:59.915 "dhgroup": "ffdhe3072" 00:22:59.915 } 00:22:59.915 } 00:22:59.915 ]' 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.915 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.174 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:23:00.741 11:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:00.741 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.999 11:28:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.999 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.316 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.316 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.316 { 00:23:01.316 "cntlid": 119, 00:23:01.316 "qid": 0, 00:23:01.316 "state": "enabled", 00:23:01.316 "thread": "nvmf_tgt_poll_group_000", 00:23:01.316 "listen_address": { 00:23:01.316 "trtype": "TCP", 00:23:01.316 "adrfam": "IPv4", 00:23:01.316 "traddr": "10.0.0.2", 00:23:01.316 "trsvcid": "4420" 00:23:01.316 }, 00:23:01.316 "peer_address": { 00:23:01.316 "trtype": "TCP", 00:23:01.316 "adrfam": "IPv4", 00:23:01.316 "traddr": "10.0.0.1", 00:23:01.316 "trsvcid": "58612" 00:23:01.316 }, 00:23:01.316 "auth": { 00:23:01.317 "state": "completed", 00:23:01.317 "digest": "sha512", 00:23:01.317 "dhgroup": "ffdhe3072" 00:23:01.317 } 00:23:01.317 } 00:23:01.317 ]' 00:23:01.317 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.317 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.317 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.575 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:01.575 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.576 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.576 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.576 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.576 11:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.142 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.143 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.143 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:02.143 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.401 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.660 00:23:02.660 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.660 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.660 11:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.918 { 00:23:02.918 "cntlid": 121, 00:23:02.918 "qid": 0, 00:23:02.918 "state": "enabled", 00:23:02.918 "thread": "nvmf_tgt_poll_group_000", 00:23:02.918 "listen_address": { 00:23:02.918 "trtype": "TCP", 00:23:02.918 "adrfam": "IPv4", 00:23:02.918 "traddr": "10.0.0.2", 00:23:02.918 "trsvcid": "4420" 00:23:02.918 }, 00:23:02.918 "peer_address": { 00:23:02.918 "trtype": "TCP", 00:23:02.918 "adrfam": "IPv4", 00:23:02.918 "traddr": "10.0.0.1", 00:23:02.918 "trsvcid": "58628" 00:23:02.918 }, 00:23:02.918 "auth": { 00:23:02.918 "state": "completed", 00:23:02.918 "digest": "sha512", 00:23:02.918 "dhgroup": "ffdhe4096" 00:23:02.918 } 00:23:02.918 } 00:23:02.918 ]' 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.918 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.177 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:23:03.744 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.744 11:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:03.744 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.744 11:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.744 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.744 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.744 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:03.744 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:04.003 11:28:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.003 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.261 00:23:04.261 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.261 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.261 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.519 { 00:23:04.519 "cntlid": 123, 00:23:04.519 "qid": 0, 00:23:04.519 "state": "enabled", 00:23:04.519 "thread": "nvmf_tgt_poll_group_000", 00:23:04.519 "listen_address": { 00:23:04.519 "trtype": "TCP", 00:23:04.519 "adrfam": "IPv4", 00:23:04.519 "traddr": "10.0.0.2", 00:23:04.519 "trsvcid": "4420" 00:23:04.519 }, 00:23:04.519 "peer_address": { 00:23:04.519 "trtype": "TCP", 00:23:04.519 "adrfam": "IPv4", 00:23:04.519 "traddr": "10.0.0.1", 00:23:04.519 "trsvcid": "58658" 00:23:04.519 }, 00:23:04.519 "auth": { 00:23:04.519 "state": "completed", 00:23:04.519 "digest": "sha512", 00:23:04.519 "dhgroup": "ffdhe4096" 00:23:04.519 } 00:23:04.519 } 00:23:04.519 ]' 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.519 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.777 11:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:05.343 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.602 11:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.860 00:23:05.860 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.860 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.861 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.119 { 00:23:06.119 "cntlid": 125, 00:23:06.119 "qid": 0, 00:23:06.119 "state": "enabled", 00:23:06.119 "thread": "nvmf_tgt_poll_group_000", 00:23:06.119 "listen_address": { 00:23:06.119 "trtype": "TCP", 00:23:06.119 "adrfam": "IPv4", 00:23:06.119 "traddr": "10.0.0.2", 00:23:06.119 "trsvcid": "4420" 00:23:06.119 }, 00:23:06.119 "peer_address": { 00:23:06.119 "trtype": "TCP", 00:23:06.119 "adrfam": "IPv4", 00:23:06.119 "traddr": "10.0.0.1", 00:23:06.119 "trsvcid": "58686" 00:23:06.119 }, 00:23:06.119 "auth": { 00:23:06.119 "state": "completed", 00:23:06.119 "digest": "sha512", 00:23:06.119 "dhgroup": "ffdhe4096" 00:23:06.119 } 00:23:06.119 } 00:23:06.119 ]' 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.119 11:28:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.376 11:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:06.942 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.200 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.200 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.459 { 00:23:07.459 "cntlid": 127, 00:23:07.459 "qid": 0, 00:23:07.459 "state": "enabled", 00:23:07.459 "thread": "nvmf_tgt_poll_group_000", 00:23:07.459 "listen_address": { 00:23:07.459 "trtype": "TCP", 00:23:07.459 "adrfam": "IPv4", 00:23:07.459 "traddr": "10.0.0.2", 00:23:07.459 "trsvcid": "4420" 00:23:07.459 }, 00:23:07.459 "peer_address": { 00:23:07.459 "trtype": "TCP", 00:23:07.459 "adrfam": "IPv4", 00:23:07.459 "traddr": "10.0.0.1", 00:23:07.459 "trsvcid": "58722" 00:23:07.459 }, 00:23:07.459 "auth": { 00:23:07.459 "state": "completed", 00:23:07.459 "digest": "sha512", 00:23:07.459 "dhgroup": "ffdhe4096" 00:23:07.459 } 00:23:07.459 } 00:23:07.459 ]' 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.459 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.718 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:07.718 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.718 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.718 11:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.718 11:28:53 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.718 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:23:08.286 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.286 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:08.286 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.286 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.569 11:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.827 00:23:08.827 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.827 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.827 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.086 { 00:23:09.086 "cntlid": 129, 00:23:09.086 "qid": 0, 00:23:09.086 "state": "enabled", 00:23:09.086 "thread": "nvmf_tgt_poll_group_000", 00:23:09.086 "listen_address": { 00:23:09.086 "trtype": "TCP", 00:23:09.086 "adrfam": "IPv4", 00:23:09.086 "traddr": "10.0.0.2", 00:23:09.086 "trsvcid": "4420" 00:23:09.086 }, 00:23:09.086 "peer_address": { 00:23:09.086 "trtype": "TCP", 00:23:09.086 "adrfam": "IPv4", 00:23:09.086 "traddr": "10.0.0.1", 00:23:09.086 "trsvcid": "58750" 00:23:09.086 }, 00:23:09.086 "auth": { 00:23:09.086 "state": "completed", 00:23:09.086 "digest": "sha512", 00:23:09.086 "dhgroup": "ffdhe6144" 00:23:09.086 } 00:23:09.086 } 00:23:09.086 ]' 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:09.086 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.344 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.344 11:28:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.344 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.345 11:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:09.913 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:10.172 11:28:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.172 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.431 00:23:10.431 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.431 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:23:10.431 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.690 { 00:23:10.690 "cntlid": 131, 00:23:10.690 "qid": 0, 00:23:10.690 "state": "enabled", 00:23:10.690 "thread": "nvmf_tgt_poll_group_000", 00:23:10.690 "listen_address": { 00:23:10.690 "trtype": "TCP", 00:23:10.690 "adrfam": "IPv4", 00:23:10.690 "traddr": "10.0.0.2", 00:23:10.690 "trsvcid": "4420" 00:23:10.690 }, 00:23:10.690 "peer_address": { 00:23:10.690 "trtype": "TCP", 00:23:10.690 "adrfam": "IPv4", 00:23:10.690 "traddr": "10.0.0.1", 00:23:10.690 "trsvcid": "37224" 00:23:10.690 }, 00:23:10.690 "auth": { 00:23:10.690 "state": "completed", 00:23:10.690 "digest": "sha512", 00:23:10.690 "dhgroup": "ffdhe6144" 00:23:10.690 } 00:23:10.690 } 00:23:10.690 ]' 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.690 11:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.691 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:10.691 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:23:10.950 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.950 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.950 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.950 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.517 11:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.777 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.036 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.295 { 00:23:12.295 "cntlid": 133, 00:23:12.295 "qid": 0, 00:23:12.295 "state": "enabled", 00:23:12.295 "thread": "nvmf_tgt_poll_group_000", 00:23:12.295 "listen_address": { 00:23:12.295 "trtype": "TCP", 00:23:12.295 "adrfam": "IPv4", 00:23:12.295 "traddr": "10.0.0.2", 00:23:12.295 "trsvcid": "4420" 00:23:12.295 }, 00:23:12.295 "peer_address": { 00:23:12.295 "trtype": "TCP", 00:23:12.295 "adrfam": "IPv4", 00:23:12.295 "traddr": "10.0.0.1", 00:23:12.295 "trsvcid": "37250" 00:23:12.295 }, 00:23:12.295 "auth": { 00:23:12.295 "state": "completed", 00:23:12.295 "digest": "sha512", 00:23:12.295 "dhgroup": "ffdhe6144" 00:23:12.295 } 00:23:12.295 } 00:23:12.295 ]' 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.295 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.554 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:12.554 11:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.554 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.554 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.554 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.554 11:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:23:13.121 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.121 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.121 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.121 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.121 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.380 11:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.638 00:23:13.897 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:23:13.897 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.898 { 00:23:13.898 "cntlid": 135, 00:23:13.898 "qid": 0, 00:23:13.898 "state": "enabled", 00:23:13.898 "thread": "nvmf_tgt_poll_group_000", 00:23:13.898 "listen_address": { 00:23:13.898 "trtype": "TCP", 00:23:13.898 "adrfam": "IPv4", 00:23:13.898 "traddr": "10.0.0.2", 00:23:13.898 "trsvcid": "4420" 00:23:13.898 }, 00:23:13.898 "peer_address": { 00:23:13.898 "trtype": "TCP", 00:23:13.898 "adrfam": "IPv4", 00:23:13.898 "traddr": "10.0.0.1", 00:23:13.898 "trsvcid": "37280" 00:23:13.898 }, 00:23:13.898 "auth": { 00:23:13.898 "state": "completed", 00:23:13.898 "digest": "sha512", 00:23:13.898 "dhgroup": "ffdhe6144" 00:23:13.898 } 00:23:13.898 } 00:23:13.898 ]' 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.898 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.157 11:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:23:14.724 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.724 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.724 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.724 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.982 11:29:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.982 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.549 00:23:15.549 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.549 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.549 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.808 { 00:23:15.808 "cntlid": 137, 00:23:15.808 "qid": 0, 00:23:15.808 "state": "enabled", 00:23:15.808 "thread": "nvmf_tgt_poll_group_000", 00:23:15.808 "listen_address": { 00:23:15.808 "trtype": "TCP", 00:23:15.808 "adrfam": "IPv4", 00:23:15.808 "traddr": "10.0.0.2", 00:23:15.808 "trsvcid": "4420" 00:23:15.808 }, 00:23:15.808 "peer_address": { 00:23:15.808 "trtype": "TCP", 00:23:15.808 "adrfam": "IPv4", 00:23:15.808 "traddr": "10.0.0.1", 00:23:15.808 "trsvcid": "37306" 00:23:15.808 }, 00:23:15.808 "auth": { 00:23:15.808 "state": "completed", 00:23:15.808 "digest": "sha512", 00:23:15.808 "dhgroup": "ffdhe8192" 00:23:15.808 } 00:23:15.808 } 00:23:15.808 ]' 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.808 11:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:23:15.808 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.808 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.808 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.808 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.808 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.066 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.634 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.893 11:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.893 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.893 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.893 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.151 00:23:17.151 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.151 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.151 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.409 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.410 { 00:23:17.410 "cntlid": 139, 00:23:17.410 "qid": 0, 00:23:17.410 "state": "enabled", 00:23:17.410 "thread": "nvmf_tgt_poll_group_000", 00:23:17.410 "listen_address": { 00:23:17.410 "trtype": "TCP", 00:23:17.410 "adrfam": "IPv4", 00:23:17.410 "traddr": "10.0.0.2", 00:23:17.410 "trsvcid": "4420" 00:23:17.410 }, 00:23:17.410 "peer_address": { 00:23:17.410 "trtype": "TCP", 00:23:17.410 "adrfam": "IPv4", 00:23:17.410 "traddr": "10.0.0.1", 00:23:17.410 "trsvcid": "37340" 00:23:17.410 }, 00:23:17.410 "auth": { 00:23:17.410 "state": "completed", 00:23:17.410 "digest": "sha512", 00:23:17.410 "dhgroup": "ffdhe8192" 00:23:17.410 } 00:23:17.410 } 00:23:17.410 ]' 00:23:17.410 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.410 11:29:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.410 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.410 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:17.410 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.667 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.667 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.667 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.667 11:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWU5ZmY0N2ExMTQwNTA3Nzg1Y2YwZjdiMmJmNWRjNmRWCiWo: --dhchap-ctrl-secret DHHC-1:02:OWY5OWNmOTE1M2E4OGY0YzJjNTk3MmY1ZDlkMzMxN2UxMmMxNjk1MTMyY2JhZjNhzOR5MQ==: 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.235 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.494 11:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.062 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.062 { 00:23:19.062 "cntlid": 141, 00:23:19.062 "qid": 0, 00:23:19.062 "state": "enabled", 00:23:19.062 "thread": "nvmf_tgt_poll_group_000", 00:23:19.062 "listen_address": { 00:23:19.062 "trtype": "TCP", 00:23:19.062 "adrfam": "IPv4", 00:23:19.062 "traddr": "10.0.0.2", 00:23:19.062 "trsvcid": "4420" 00:23:19.062 }, 00:23:19.062 "peer_address": { 00:23:19.062 "trtype": "TCP", 00:23:19.062 "adrfam": "IPv4", 00:23:19.062 "traddr": "10.0.0.1", 00:23:19.062 "trsvcid": "37378" 00:23:19.062 }, 00:23:19.062 "auth": { 00:23:19.062 "state": "completed", 00:23:19.062 "digest": "sha512", 00:23:19.062 "dhgroup": "ffdhe8192" 00:23:19.062 } 00:23:19.062 } 00:23:19.062 ]' 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.062 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.321 11:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Y2ZlYzI4N2QzNjVkZTFiOTAwMzlhZjc5NDNjNTJmYzYxMDkwN2QyNDI4YjFkYjE3pX6SEg==: --dhchap-ctrl-secret DHHC-1:01:NjNkYzcwMmI2YmUyZmY2OGRhMGFlYjdhNTA0YjYyZmKU+3WZ: 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.890 
11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:19.890 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.149 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.719 00:23:20.719 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.719 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.719 11:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.978 { 00:23:20.978 "cntlid": 143, 00:23:20.978 "qid": 0, 00:23:20.978 "state": "enabled", 00:23:20.978 "thread": "nvmf_tgt_poll_group_000", 00:23:20.978 "listen_address": { 00:23:20.978 "trtype": "TCP", 00:23:20.978 "adrfam": "IPv4", 00:23:20.978 "traddr": "10.0.0.2", 00:23:20.978 "trsvcid": "4420" 00:23:20.978 }, 00:23:20.978 "peer_address": { 00:23:20.978 "trtype": "TCP", 00:23:20.978 "adrfam": "IPv4", 00:23:20.978 "traddr": "10.0.0.1", 00:23:20.978 "trsvcid": "55952" 00:23:20.978 }, 00:23:20.978 "auth": { 00:23:20.978 "state": "completed", 00:23:20.978 "digest": "sha512", 00:23:20.978 "dhgroup": "ffdhe8192" 00:23:20.978 } 00:23:20.978 } 00:23:20.978 ]' 00:23:20.978 11:29:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.978 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.237 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.805 
11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.805 11:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.805 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.064 11:29:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.064 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.064 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.323 00:23:22.323 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.323 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.323 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.581 { 00:23:22.581 "cntlid": 145, 00:23:22.581 "qid": 0, 00:23:22.581 "state": "enabled", 00:23:22.581 "thread": "nvmf_tgt_poll_group_000", 00:23:22.581 "listen_address": { 00:23:22.581 "trtype": "TCP", 00:23:22.581 "adrfam": 
"IPv4", 00:23:22.581 "traddr": "10.0.0.2", 00:23:22.581 "trsvcid": "4420" 00:23:22.581 }, 00:23:22.581 "peer_address": { 00:23:22.581 "trtype": "TCP", 00:23:22.581 "adrfam": "IPv4", 00:23:22.581 "traddr": "10.0.0.1", 00:23:22.581 "trsvcid": "55962" 00:23:22.581 }, 00:23:22.581 "auth": { 00:23:22.581 "state": "completed", 00:23:22.581 "digest": "sha512", 00:23:22.581 "dhgroup": "ffdhe8192" 00:23:22.581 } 00:23:22.581 } 00:23:22.581 ]' 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:22.581 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.840 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.840 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.840 11:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.840 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OGViNjk2MGQ5NDcwOTc1ODBiZjQwMGMxNjZiZWVlMWI2NjU1YWI1ZjRlNDkzZTYzObR9Ww==: --dhchap-ctrl-secret DHHC-1:03:ZTM4YjBiYWZjMjBhMDY2YWQzYTRjMjI2MmI1NWNhYzY2ZDk0YmIwNTIxM2Q0MjlkZDM3ZjIwMTVhNTEyMGJkZLp213Q=: 00:23:23.472 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.472 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.473 11:29:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:23.473 11:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:24.042 request: 00:23:24.042 { 00:23:24.042 "name": "nvme0", 00:23:24.042 "trtype": "tcp", 00:23:24.042 "traddr": "10.0.0.2", 00:23:24.042 "adrfam": "ipv4", 00:23:24.042 "trsvcid": "4420", 00:23:24.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:24.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:24.042 "prchk_reftag": false, 00:23:24.042 "prchk_guard": false, 00:23:24.042 "hdgst": false, 00:23:24.042 "ddgst": false, 00:23:24.042 "dhchap_key": "key2", 00:23:24.042 "method": "bdev_nvme_attach_controller", 00:23:24.042 "req_id": 1 00:23:24.042 } 00:23:24.042 Got JSON-RPC error response 00:23:24.042 response: 00:23:24.042 { 00:23:24.042 "code": -5, 00:23:24.042 "message": "Input/output error" 00:23:24.042 } 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.042 
11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:24.042 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:24.302 request: 00:23:24.302 { 00:23:24.302 "name": "nvme0", 00:23:24.302 "trtype": "tcp", 00:23:24.302 "traddr": "10.0.0.2", 00:23:24.302 "adrfam": "ipv4", 00:23:24.302 "trsvcid": "4420", 00:23:24.302 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:24.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:24.302 "prchk_reftag": false, 00:23:24.302 "prchk_guard": false, 00:23:24.302 "hdgst": false, 00:23:24.302 "ddgst": false, 00:23:24.302 "dhchap_key": "key1", 00:23:24.302 "dhchap_ctrlr_key": "ckey2", 00:23:24.302 "method": "bdev_nvme_attach_controller", 00:23:24.302 "req_id": 1 00:23:24.302 } 00:23:24.302 Got JSON-RPC error response 00:23:24.302 response: 00:23:24.302 { 00:23:24.302 "code": -5, 00:23:24.302 "message": "Input/output error" 00:23:24.302 } 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.302 11:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.871 request: 00:23:24.871 { 00:23:24.871 "name": "nvme0", 00:23:24.871 "trtype": "tcp", 00:23:24.871 "traddr": "10.0.0.2", 00:23:24.872 "adrfam": "ipv4", 00:23:24.872 "trsvcid": "4420", 00:23:24.872 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:24.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:24.872 "prchk_reftag": false, 00:23:24.872 "prchk_guard": false, 00:23:24.872 "hdgst": false, 00:23:24.872 "ddgst": false, 00:23:24.872 "dhchap_key": "key1", 00:23:24.872 "dhchap_ctrlr_key": "ckey1", 00:23:24.872 "method": "bdev_nvme_attach_controller", 00:23:24.872 "req_id": 1 00:23:24.872 } 00:23:24.872 Got JSON-RPC error response 00:23:24.872 response: 00:23:24.872 { 00:23:24.872 "code": -5, 00:23:24.872 "message": "Input/output error" 00:23:24.872 } 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.872 11:29:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 953124 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 953124 ']' 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 953124 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953124 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953124' 00:23:24.872 killing process with pid 953124 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 953124 00:23:24.872 11:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 953124 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 
00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=973878 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 973878 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 973878 ']' 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.249 11:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 973878 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 973878 ']' 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.224 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.483 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.483 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:23:27.483 11:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:27.483 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.483 11:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.742 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.310 00:23:28.310 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.310 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.310 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.569 { 00:23:28.569 "cntlid": 1, 00:23:28.569 "qid": 0, 00:23:28.569 "state": "enabled", 00:23:28.569 "thread": "nvmf_tgt_poll_group_000", 00:23:28.569 "listen_address": { 00:23:28.569 "trtype": "TCP", 00:23:28.569 "adrfam": "IPv4", 00:23:28.569 "traddr": "10.0.0.2", 00:23:28.569 "trsvcid": "4420" 00:23:28.569 }, 00:23:28.569 "peer_address": { 00:23:28.569 "trtype": "TCP", 00:23:28.569 "adrfam": "IPv4", 00:23:28.569 "traddr": "10.0.0.1", 00:23:28.569 "trsvcid": 
"56034" 00:23:28.569 }, 00:23:28.569 "auth": { 00:23:28.569 "state": "completed", 00:23:28.569 "digest": "sha512", 00:23:28.569 "dhgroup": "ffdhe8192" 00:23:28.569 } 00:23:28.569 } 00:23:28.569 ]' 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.569 11:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.828 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjA5NzAxYjA2YjllMjVlNzVkZDY5NTI1YjY3MDNiOGI1YzlkMDNjMWJkZmM0MGQ4M2ZiMTE4ZjlmNTZlMmFjY++oi2c=: 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:29.397 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.656 request: 00:23:29.656 { 00:23:29.656 "name": "nvme0", 00:23:29.656 "trtype": "tcp", 00:23:29.656 "traddr": "10.0.0.2", 00:23:29.656 "adrfam": "ipv4", 00:23:29.656 "trsvcid": "4420", 00:23:29.656 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:29.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:29.656 "prchk_reftag": false, 00:23:29.656 "prchk_guard": false, 00:23:29.656 "hdgst": false, 00:23:29.656 "ddgst": false, 00:23:29.656 "dhchap_key": "key3", 00:23:29.656 "method": "bdev_nvme_attach_controller", 00:23:29.656 "req_id": 1 00:23:29.656 } 00:23:29.656 Got JSON-RPC error response 00:23:29.656 response: 00:23:29.656 { 00:23:29.656 "code": -5, 00:23:29.656 "message": "Input/output error" 00:23:29.656 } 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.656 11:29:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:29.656 11:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.915 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.915 
11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.175 request: 00:23:30.175 { 00:23:30.175 "name": "nvme0", 00:23:30.175 "trtype": "tcp", 00:23:30.175 "traddr": "10.0.0.2", 00:23:30.175 "adrfam": "ipv4", 00:23:30.175 "trsvcid": "4420", 00:23:30.175 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:30.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:30.175 "prchk_reftag": false, 00:23:30.175 "prchk_guard": false, 00:23:30.175 "hdgst": false, 00:23:30.175 "ddgst": false, 00:23:30.175 "dhchap_key": "key3", 00:23:30.175 "method": "bdev_nvme_attach_controller", 00:23:30.175 "req_id": 1 00:23:30.175 } 00:23:30.175 Got JSON-RPC error response 00:23:30.175 response: 00:23:30.175 { 00:23:30.175 "code": -5, 00:23:30.175 "message": "Input/output error" 00:23:30.175 } 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:30.175 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:30.434 request: 00:23:30.434 { 00:23:30.434 "name": "nvme0", 00:23:30.434 "trtype": "tcp", 00:23:30.434 "traddr": "10.0.0.2", 00:23:30.434 "adrfam": "ipv4", 00:23:30.434 "trsvcid": "4420", 00:23:30.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:30.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:30.434 "prchk_reftag": false, 00:23:30.434 "prchk_guard": false, 00:23:30.434 "hdgst": false, 00:23:30.434 "ddgst": false, 00:23:30.434 "dhchap_key": "key0", 00:23:30.434 "dhchap_ctrlr_key": "key1", 00:23:30.434 "method": "bdev_nvme_attach_controller", 00:23:30.434 "req_id": 1 00:23:30.434 } 00:23:30.434 Got JSON-RPC error response 00:23:30.434 response: 00:23:30.434 { 
00:23:30.434 "code": -5, 00:23:30.434 "message": "Input/output error" 00:23:30.434 } 00:23:30.434 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:30.434 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:30.434 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:30.434 11:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:30.434 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:30.435 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:30.694 00:23:30.694 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:30.694 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:30.694 11:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 953282 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 953282 ']' 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 953282 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.953 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953282 00:23:31.212 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.213 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.213 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953282' 00:23:31.213 killing process with pid 953282 00:23:31.213 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 953282 00:23:31.213 11:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 953282 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.749 rmmod nvme_tcp 00:23:33.749 rmmod nvme_fabrics 00:23:33.749 
rmmod nvme_keyring 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 973878 ']' 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 973878 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 973878 ']' 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 973878 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 973878 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 973878' 00:23:33.749 killing process with pid 973878 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 973878 00:23:33.749 11:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 973878 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.127 
11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.127 11:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.031 11:29:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:37.031 11:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5Yw /tmp/spdk.key-sha256.RsA /tmp/spdk.key-sha384.X8M /tmp/spdk.key-sha512.zeA /tmp/spdk.key-sha512.zjH /tmp/spdk.key-sha384.DOI /tmp/spdk.key-sha256.9Cx '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:37.031 00:23:37.031 real 2m16.441s 00:23:37.031 user 5m10.321s 00:23:37.031 sys 0m20.597s 00:23:37.031 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:37.031 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.031 ************************************ 00:23:37.031 END TEST nvmf_auth_target 00:23:37.031 ************************************ 00:23:37.031 11:29:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:37.031 11:29:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:37.031 11:29:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:37.031 11:29:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:37.031 11:29:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:37.031 11:29:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.031 ************************************ 00:23:37.031 
START TEST nvmf_bdevio_no_huge 00:23:37.031 ************************************ 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:37.031 * Looking for test storage... 00:23:37.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.031 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:37.290 11:29:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.290 11:29:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:42.561 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:42.561 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:42.561 Found net devices under 0000:86:00.0: cvl_0_0 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:42.561 Found net devices under 0000:86:00.1: cvl_0_1 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:42.561 11:29:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.561 
11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:23:42.561 00:23:42.561 --- 10.0.0.2 ping statistics --- 00:23:42.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.561 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:42.561 00:23:42.561 --- 10.0.0.1 ping statistics --- 00:23:42.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.561 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.561 11:29:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=978605 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 978605 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 978605 ']' 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.561 11:29:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:42.561 [2024-07-12 11:29:28.877128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:42.561 [2024-07-12 11:29:28.877249] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:42.819 [2024-07-12 11:29:29.006660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.079 [2024-07-12 11:29:29.248317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.079 [2024-07-12 11:29:29.248365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.079 [2024-07-12 11:29:29.248383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.079 [2024-07-12 11:29:29.248392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.079 [2024-07-12 11:29:29.248401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:43.079 [2024-07-12 11:29:29.248572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.079 [2024-07-12 11:29:29.248664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:43.079 [2024-07-12 11:29:29.248728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.079 [2024-07-12 11:29:29.248753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.338 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.596 [2024-07-12 11:29:29.700986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.596 Malloc0 00:23:43.596 11:29:29 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.596 [2024-07-12 11:29:29.808391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:43.596 11:29:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.596 { 00:23:43.596 "params": { 00:23:43.596 "name": "Nvme$subsystem", 00:23:43.596 "trtype": "$TEST_TRANSPORT", 00:23:43.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.596 "adrfam": "ipv4", 00:23:43.596 "trsvcid": "$NVMF_PORT", 00:23:43.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.596 "hdgst": ${hdgst:-false}, 00:23:43.596 "ddgst": ${ddgst:-false} 00:23:43.596 }, 00:23:43.596 "method": "bdev_nvme_attach_controller" 00:23:43.596 } 00:23:43.596 EOF 00:23:43.596 )") 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:43.596 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:43.597 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:43.597 11:29:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.597 "params": { 00:23:43.597 "name": "Nvme1", 00:23:43.597 "trtype": "tcp", 00:23:43.597 "traddr": "10.0.0.2", 00:23:43.597 "adrfam": "ipv4", 00:23:43.597 "trsvcid": "4420", 00:23:43.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.597 "hdgst": false, 00:23:43.597 "ddgst": false 00:23:43.597 }, 00:23:43.597 "method": "bdev_nvme_attach_controller" 00:23:43.597 }' 00:23:43.597 [2024-07-12 11:29:29.882750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:43.597 [2024-07-12 11:29:29.882838] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid978855 ] 00:23:43.855 [2024-07-12 11:29:30.000219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:44.112 [2024-07-12 11:29:30.229629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.112 [2024-07-12 11:29:30.229696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.112 [2024-07-12 11:29:30.229701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.371 I/O targets: 00:23:44.371 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:44.371 00:23:44.371 00:23:44.371 CUnit - A unit testing framework for C - Version 2.1-3 00:23:44.371 http://cunit.sourceforge.net/ 00:23:44.371 00:23:44.371 00:23:44.371 Suite: bdevio tests on: Nvme1n1 00:23:44.371 Test: blockdev write read block ...passed 00:23:44.630 Test: blockdev write zeroes read block ...passed 00:23:44.630 Test: blockdev write zeroes read no split ...passed 00:23:44.630 Test: blockdev write zeroes read split ...passed 00:23:44.630 Test: blockdev write zeroes read split partial ...passed 00:23:44.630 Test: blockdev reset ...[2024-07-12 11:29:30.901697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.630 [2024-07-12 11:29:30.901813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b200 (9): Bad file descriptor 00:23:44.630 [2024-07-12 11:29:30.962788] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:44.630 passed 00:23:44.630 Test: blockdev write read 8 blocks ...passed 00:23:44.630 Test: blockdev write read size > 128k ...passed 00:23:44.630 Test: blockdev write read invalid size ...passed 00:23:44.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:44.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:44.890 Test: blockdev write read max offset ...passed 00:23:44.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:44.890 Test: blockdev writev readv 8 blocks ...passed 00:23:44.890 Test: blockdev writev readv 30 x 1block ...passed 00:23:44.890 Test: blockdev writev readv block ...passed 00:23:44.890 Test: blockdev writev readv size > 128k ...passed 00:23:44.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:44.890 Test: blockdev comparev and writev ...[2024-07-12 11:29:31.179069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.179141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.179460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.179497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.179785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.180106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.180126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:44.890 [2024-07-12 11:29:31.180143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:44.890 [2024-07-12 11:29:31.180154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:44.890 passed 00:23:45.149 Test: blockdev nvme passthru rw ...passed 00:23:45.149 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:29:31.262826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.149 [2024-07-12 11:29:31.262864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.149 [2024-07-12 11:29:31.262998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.149 [2024-07-12 11:29:31.263013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.149 [2024-07-12 11:29:31.263133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.149 [2024-07-12 11:29:31.263147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.149 [2024-07-12 11:29:31.263274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:45.149 [2024-07-12 11:29:31.263289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.149 passed 00:23:45.149 Test: blockdev nvme admin passthru ...passed 00:23:45.149 Test: blockdev copy ...passed 00:23:45.149 00:23:45.149 Run Summary: Type Total Ran Passed Failed Inactive 00:23:45.149 suites 1 1 n/a 0 0 00:23:45.149 tests 23 23 23 0 0 00:23:45.150 asserts 152 152 152 0 n/a 00:23:45.150 00:23:45.150 Elapsed time = 1.352 seconds 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.718 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.718 rmmod nvme_tcp 00:23:45.718 rmmod nvme_fabrics 00:23:45.718 rmmod nvme_keyring 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 978605 ']' 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 978605 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 978605 ']' 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 978605 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 978605 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 978605' 00:23:45.978 killing process with pid 978605 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 978605 00:23:45.978 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 978605 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.914 11:29:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.818 11:29:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.818 00:23:48.818 real 0m11.701s 00:23:48.818 user 0m19.545s 00:23:48.818 sys 0m5.174s 00:23:48.818 11:29:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.818 11:29:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:48.818 ************************************ 00:23:48.818 END TEST nvmf_bdevio_no_huge 00:23:48.818 ************************************ 00:23:48.818 11:29:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:48.818 11:29:35 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:48.818 11:29:35 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:48.818 11:29:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.818 11:29:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.818 ************************************ 00:23:48.818 START TEST nvmf_tls 00:23:48.818 ************************************ 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:48.818 * Looking for test storage... 00:23:48.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.818 11:29:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.819 11:29:35 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.819 11:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:54.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:54.138 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.138 11:29:40 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:54.138 Found net devices under 0000:86:00.0: cvl_0_0 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.138 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:54.139 Found net devices under 0000:86:00.1: cvl_0_1 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:54.139 00:23:54.139 --- 10.0.0.2 ping statistics --- 00:23:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.139 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:23:54.139 00:23:54.139 --- 10.0.0.1 ping statistics --- 00:23:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.139 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=982787 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 982787 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 982787 ']' 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.139 11:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.139 [2024-07-12 11:29:40.491441] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:54.139 [2024-07-12 11:29:40.491528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.397 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.397 [2024-07-12 11:29:40.600200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.655 [2024-07-12 11:29:40.811328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.655 [2024-07-12 11:29:40.811374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.655 [2024-07-12 11:29:40.811390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.655 [2024-07-12 11:29:40.811400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.655 [2024-07-12 11:29:40.811409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
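Later in this run (target/tls.sh@118-119), `format_interchange_psk` from nvmf/common.sh turns a hex key into an NVMe TLS PSK interchange string. A minimal sketch of that helper, assuming the construction SPDK uses (base64 of the ASCII key bytes with a little-endian CRC32 appended, wrapped in a `NVMeTLSkey-1:<hh>:...:` envelope); the function name matches the log, the body is a simplification:

```shell
# Sketch (assumed, simplified) of format_interchange_psk from nvmf/common.sh:
# emit "NVMeTLSkey-1:<digest as 2 hex>:<base64(key bytes || CRC32(key) LE)>:".
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode("ascii")            # the ASCII hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32, little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(key + crc).decode("ascii")), end="")
EOF
}
```

For the keys used in this run it reproduces the `NVMeTLSkey-1:01:...` strings logged at tls.sh@118 and tls.sh@119, which are then written to the `mktemp` files and passed to the target via `--psk`.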
00:23:54.655 [2024-07-12 11:29:40.811436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.914 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.914 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.914 11:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.914 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.914 11:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.173 11:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.173 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:55.173 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:55.173 true 00:23:55.173 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:55.173 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:55.431 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:55.431 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:55.431 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:55.690 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:55.690 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:55.690 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:55.690 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:55.690 11:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:55.948 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:55.948 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:56.207 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:56.465 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:56.465 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.723 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:56.723 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:56.724 11:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:56.724 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:56.724 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.2OOpA8GfQW 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:56.982 
11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.vJ2nm1dGbt 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2OOpA8GfQW 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vJ2nm1dGbt 00:23:56.982 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:57.240 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:57.810 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2OOpA8GfQW 00:23:57.810 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2OOpA8GfQW 00:23:57.810 11:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.810 [2024-07-12 11:29:44.127818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.810 11:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.069 11:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.329 [2024-07-12 11:29:44.472823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.329 [2024-07-12 11:29:44.473054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:58.329 11:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.587 malloc0 00:23:58.587 11:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.587 11:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2OOpA8GfQW 00:23:58.846 [2024-07-12 11:29:45.036631] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.846 11:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2OOpA8GfQW 00:23:58.846 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.050 Initializing NVMe Controllers 00:24:11.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.050 Initialization complete. Launching workers. 
00:24:11.050 ======================================================== 00:24:11.050 Latency(us) 00:24:11.050 Device Information : IOPS MiB/s Average min max 00:24:11.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12791.23 49.97 5004.22 1294.62 6594.76 00:24:11.050 ======================================================== 00:24:11.050 Total : 12791.23 49.97 5004.22 1294.62 6594.76 00:24:11.050 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2OOpA8GfQW 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2OOpA8GfQW' 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=985178 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 985178 /var/tmp/bdevperf.sock 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 985178 ']' 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.050 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.051 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.051 11:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.051 [2024-07-12 11:29:55.321906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:11.051 [2024-07-12 11:29:55.321997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985178 ] 00:24:11.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.051 [2024-07-12 11:29:55.419060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.051 [2024-07-12 11:29:55.640658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.051 11:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.051 11:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:11.051 11:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2OOpA8GfQW 00:24:11.051 [2024-07-12 11:29:56.245799] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.051 [2024-07-12 11:29:56.245903] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.051 TLSTESTn1 00:24:11.051 11:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:11.051 Running I/O for 10 seconds... 00:24:21.025 00:24:21.025 Latency(us) 00:24:21.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:21.025 Verification LBA range: start 0x0 length 0x2000 00:24:21.025 TLSTESTn1 : 10.02 4501.17 17.58 0.00 0.00 28392.65 7522.39 30545.47 00:24:21.025 =================================================================================================================== 00:24:21.025 Total : 4501.17 17.58 0.00 0.00 28392.65 7522.39 30545.47 00:24:21.025 0 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 985178 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 985178 ']' 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 985178 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 985178 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 985178' 00:24:21.025 killing process with pid 985178 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 985178 00:24:21.025 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.025 00:24:21.025 Latency(us) 
00:24:21.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.025 =================================================================================================================== 00:24:21.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.025 [2024-07-12 11:30:06.524878] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:21.025 11:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 985178 00:24:21.283 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJ2nm1dGbt 00:24:21.283 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:21.283 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJ2nm1dGbt 00:24:21.283 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:21.283 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vJ2nm1dGbt 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vJ2nm1dGbt' 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=987361 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 987361 /var/tmp/bdevperf.sock 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 987361 ']' 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.284 11:30:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.543 [2024-07-12 11:30:07.667373] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:21.543 [2024-07-12 11:30:07.667466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987361 ] 00:24:21.543 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.543 [2024-07-12 11:30:07.764397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.801 [2024-07-12 11:30:07.987655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vJ2nm1dGbt 00:24:22.367 [2024-07-12 11:30:08.620472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.367 [2024-07-12 11:30:08.620585] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:22.367 [2024-07-12 11:30:08.631499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:22.367 [2024-07-12 11:30:08.631781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:22.367 [2024-07-12 11:30:08.632760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:22.367 
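The `NOT run_bdevperf ...` invocation at target/tls.sh@146 above is a negative test: attaching with the wrong PSK (`/tmp/tmp.vJ2nm1dGbt`) is expected to fail, and the `NOT` wrapper from autotest_common.sh inverts the exit status so an expected failure counts as a pass. A minimal sketch of that wrapper (assumed, simplified from the real helper, which also validates the argument via `valid_exec_arg`):

```shell
# Sketch (assumed, simplified) of NOT from autotest_common.sh:
# run the command; succeed only if it fails, fail if it succeeds.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the negative test expects
}
```

This is why the bdevperf attach error below (`-5`, "Input/output error") does not abort the suite: `return 1` from run_bdevperf is flipped to success by `NOT`.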
[2024-07-12 11:30:08.633756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.367 [2024-07-12 11:30:08.633783] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:22.367 [2024-07-12 11:30:08.633800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.367 request: 00:24:22.367 { 00:24:22.367 "name": "TLSTEST", 00:24:22.367 "trtype": "tcp", 00:24:22.367 "traddr": "10.0.0.2", 00:24:22.367 "adrfam": "ipv4", 00:24:22.367 "trsvcid": "4420", 00:24:22.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.367 "prchk_reftag": false, 00:24:22.367 "prchk_guard": false, 00:24:22.367 "hdgst": false, 00:24:22.367 "ddgst": false, 00:24:22.367 "psk": "/tmp/tmp.vJ2nm1dGbt", 00:24:22.367 "method": "bdev_nvme_attach_controller", 00:24:22.367 "req_id": 1 00:24:22.367 } 00:24:22.367 Got JSON-RPC error response 00:24:22.367 response: 00:24:22.367 { 00:24:22.367 "code": -5, 00:24:22.367 "message": "Input/output error" 00:24:22.367 } 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 987361 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 987361 ']' 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 987361 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987361 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 987361' 00:24:22.367 killing process with pid 987361 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 987361 00:24:22.367 Received shutdown signal, test time was about 10.000000 seconds 00:24:22.367 00:24:22.367 Latency(us) 00:24:22.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.367 =================================================================================================================== 00:24:22.367 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.367 [2024-07-12 11:30:08.680926] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:22.367 11:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 987361 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2OOpA8GfQW 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2OOpA8GfQW 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2OOpA8GfQW 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2OOpA8GfQW' 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=987954 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 987954 /var/tmp/bdevperf.sock 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 987954 ']' 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.745 11:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.745 [2024-07-12 11:30:09.803637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:23.745 [2024-07-12 11:30:09.803725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987954 ] 00:24:23.745 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.745 [2024-07-12 11:30:09.901496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.004 [2024-07-12 11:30:10.131238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.261 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.261 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:24.261 11:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2OOpA8GfQW 00:24:24.520 [2024-07-12 11:30:10.743847] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.520 [2024-07-12 11:30:10.743948] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:24.520 [2024-07-12 11:30:10.756781] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:24.520 [2024-07-12 11:30:10.756811] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:24.520 [2024-07-12 11:30:10.756865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:24.520 [2024-07-12 11:30:10.757055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:24.520 [2024-07-12 11:30:10.758031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:24.520 [2024-07-12 11:30:10.759025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.520 [2024-07-12 11:30:10.759047] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:24.520 [2024-07-12 11:30:10.759062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:24.520 request: 00:24:24.520 { 00:24:24.520 "name": "TLSTEST", 00:24:24.520 "trtype": "tcp", 00:24:24.520 "traddr": "10.0.0.2", 00:24:24.520 "adrfam": "ipv4", 00:24:24.520 "trsvcid": "4420", 00:24:24.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.520 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.520 "prchk_reftag": false, 00:24:24.520 "prchk_guard": false, 00:24:24.520 "hdgst": false, 00:24:24.520 "ddgst": false, 00:24:24.520 "psk": "/tmp/tmp.2OOpA8GfQW", 00:24:24.520 "method": "bdev_nvme_attach_controller", 00:24:24.520 "req_id": 1 00:24:24.520 } 00:24:24.520 Got JSON-RPC error response 00:24:24.520 response: 00:24:24.520 { 00:24:24.520 "code": -5, 00:24:24.520 "message": "Input/output error" 00:24:24.520 } 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 987954 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 987954 ']' 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 987954 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987954 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987954' 00:24:24.520 killing process with pid 987954 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 987954 00:24:24.520 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.520 00:24:24.520 Latency(us) 00:24:24.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.520 
=================================================================================================================== 00:24:24.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:24.520 [2024-07-12 11:30:10.820667] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:24.520 11:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 987954 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2OOpA8GfQW 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2OOpA8GfQW 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2OOpA8GfQW 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2OOpA8GfQW' 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=988457 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 988457 /var/tmp/bdevperf.sock 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 988457 ']' 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.897 11:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.897 [2024-07-12 11:30:11.929778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:25.897 [2024-07-12 11:30:11.929881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988457 ] 00:24:25.897 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.897 [2024-07-12 11:30:12.028113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.897 [2024-07-12 11:30:12.248553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.464 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.464 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:26.464 11:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2OOpA8GfQW 00:24:26.722 [2024-07-12 11:30:12.854542] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.722 [2024-07-12 11:30:12.854663] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:26.722 [2024-07-12 11:30:12.862546] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:26.722 [2024-07-12 11:30:12.862578] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:26.722 [2024-07-12 11:30:12.862630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:26.722 [2024-07-12 11:30:12.862852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (107): Transport endpoint is not connected 00:24:26.722 [2024-07-12 11:30:12.863834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:24:26.722 [2024-07-12 11:30:12.864834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:26.722 [2024-07-12 11:30:12.864855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:26.722 [2024-07-12 11:30:12.864874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:26.722 request: 00:24:26.722 { 00:24:26.722 "name": "TLSTEST", 00:24:26.722 "trtype": "tcp", 00:24:26.722 "traddr": "10.0.0.2", 00:24:26.722 "adrfam": "ipv4", 00:24:26.722 "trsvcid": "4420", 00:24:26.722 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:26.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.722 "prchk_reftag": false, 00:24:26.722 "prchk_guard": false, 00:24:26.722 "hdgst": false, 00:24:26.722 "ddgst": false, 00:24:26.722 "psk": "/tmp/tmp.2OOpA8GfQW", 00:24:26.722 "method": "bdev_nvme_attach_controller", 00:24:26.722 "req_id": 1 00:24:26.722 } 00:24:26.722 Got JSON-RPC error response 00:24:26.722 response: 00:24:26.722 { 00:24:26.722 "code": -5, 00:24:26.722 "message": "Input/output error" 00:24:26.722 } 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 988457 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 988457 ']' 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 988457 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 988457 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 988457' 00:24:26.722 killing process with pid 988457 00:24:26.722 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 988457 00:24:26.722 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.722 00:24:26.722 Latency(us) 00:24:26.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.722 =================================================================================================================== 00:24:26.722 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:26.723 [2024-07-12 11:30:12.927159] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:26.723 11:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 988457 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=988706 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 988706 /var/tmp/bdevperf.sock 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 988706 ']' 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.659 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.660 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.660 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.660 11:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.918 [2024-07-12 11:30:14.052189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:27.919 [2024-07-12 11:30:14.052277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988706 ] 00:24:27.919 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.919 [2024-07-12 11:30:14.151565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.177 [2024-07-12 11:30:14.380275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:28.744 [2024-07-12 11:30:14.967710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:28.744 [2024-07-12 11:30:14.969249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032cd80 (9): Bad file descriptor 00:24:28.744 [2024-07-12 11:30:14.970241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:28.744 [2024-07-12 11:30:14.970265] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:28.744 [2024-07-12 11:30:14.970278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.744 request: 00:24:28.744 { 00:24:28.744 "name": "TLSTEST", 00:24:28.744 "trtype": "tcp", 00:24:28.744 "traddr": "10.0.0.2", 00:24:28.744 "adrfam": "ipv4", 00:24:28.744 "trsvcid": "4420", 00:24:28.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.744 "prchk_reftag": false, 00:24:28.744 "prchk_guard": false, 00:24:28.744 "hdgst": false, 00:24:28.744 "ddgst": false, 00:24:28.744 "method": "bdev_nvme_attach_controller", 00:24:28.744 "req_id": 1 00:24:28.744 } 00:24:28.744 Got JSON-RPC error response 00:24:28.744 response: 00:24:28.744 { 00:24:28.744 "code": -5, 00:24:28.744 "message": "Input/output error" 00:24:28.744 } 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 988706 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 988706 ']' 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 988706 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.744 11:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 988706 00:24:28.744 11:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:28.744 11:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:28.744 11:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 988706' 00:24:28.744 killing process with pid 988706 00:24:28.744 11:30:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 988706 00:24:28.744 Received shutdown signal, test time was about 10.000000 seconds 00:24:28.744 00:24:28.744 Latency(us) 00:24:28.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.744 =================================================================================================================== 00:24:28.744 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:28.744 11:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 988706 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 982787 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 982787 ']' 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 982787 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982787 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982787' 00:24:30.121 killing process with pid 982787 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 982787 
00:24:30.121 [2024-07-12 11:30:16.107369] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:30.121 11:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 982787 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.YAhBXty2SN 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.YAhBXty2SN 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=989407 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 989407 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 989407 ']' 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.496 11:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.496 [2024-07-12 11:30:17.705711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:31.496 [2024-07-12 11:30:17.705797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.496 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.496 [2024-07-12 11:30:17.813598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.772 [2024-07-12 11:30:18.018867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.772 [2024-07-12 11:30:18.018912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.772 [2024-07-12 11:30:18.018923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.772 [2024-07-12 11:30:18.018934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.772 [2024-07-12 11:30:18.018943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.772 [2024-07-12 11:30:18.018970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YAhBXty2SN 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:32.380 [2024-07-12 11:30:18.661929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.380 11:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:32.637 11:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:24:32.895 [2024-07-12 11:30:19.002817] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.895 [2024-07-12 11:30:19.003063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.895 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:32.895 malloc0 00:24:32.895 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:33.154 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:33.413 [2024-07-12 11:30:19.540661] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAhBXty2SN 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YAhBXty2SN' 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=989674 00:24:33.413 11:30:19 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 989674 /var/tmp/bdevperf.sock 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 989674 ']' 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.413 11:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.413 [2024-07-12 11:30:19.626954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:33.413 [2024-07-12 11:30:19.627046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989674 ] 00:24:33.413 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.413 [2024-07-12 11:30:19.724597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.673 [2024-07-12 11:30:19.941636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.240 11:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.240 11:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:34.241 11:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:34.241 [2024-07-12 11:30:20.555167] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.241 [2024-07-12 11:30:20.555276] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:34.499 TLSTESTn1 00:24:34.499 11:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:34.499 Running I/O for 10 seconds... 
00:24:44.474 00:24:44.474 Latency(us) 00:24:44.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.474 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.474 Verification LBA range: start 0x0 length 0x2000 00:24:44.474 TLSTESTn1 : 10.03 4606.88 18.00 0.00 0.00 27729.84 7038.00 25758.50 00:24:44.474 =================================================================================================================== 00:24:44.474 Total : 4606.88 18.00 0.00 0.00 27729.84 7038.00 25758.50 00:24:44.474 0 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 989674 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 989674 ']' 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 989674 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.474 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 989674 00:24:44.734 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:44.734 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:44.734 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 989674' 00:24:44.734 killing process with pid 989674 00:24:44.734 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 989674 00:24:44.734 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.734 00:24:44.734 Latency(us) 00:24:44.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.734 
=================================================================================================================== 00:24:44.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.734 [2024-07-12 11:30:30.835976] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:44.734 11:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 989674 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.YAhBXty2SN 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAhBXty2SN 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAhBXty2SN 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:45.671 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAhBXty2SN 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YAhBXty2SN' 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=991730 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 991730 /var/tmp/bdevperf.sock 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 991730 ']' 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.672 11:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.672 [2024-07-12 11:30:32.005386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:45.672 [2024-07-12 11:30:32.005501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991730 ] 00:24:45.931 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.931 [2024-07-12 11:30:32.105181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.190 [2024-07-12 11:30:32.325182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.449 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.449 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:46.449 11:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:46.708 [2024-07-12 11:30:32.960473] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.708 [2024-07-12 11:30:32.960546] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:46.708 [2024-07-12 11:30:32.960557] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.YAhBXty2SN 00:24:46.708 request: 00:24:46.708 { 00:24:46.708 "name": "TLSTEST", 00:24:46.708 "trtype": "tcp", 00:24:46.708 "traddr": "10.0.0.2", 00:24:46.708 "adrfam": "ipv4", 00:24:46.708 "trsvcid": "4420", 00:24:46.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.708 "prchk_reftag": false, 00:24:46.708 "prchk_guard": false, 00:24:46.708 "hdgst": false, 00:24:46.708 "ddgst": false, 00:24:46.708 "psk": "/tmp/tmp.YAhBXty2SN", 00:24:46.708 "method": "bdev_nvme_attach_controller", 
00:24:46.708 "req_id": 1 00:24:46.708 } 00:24:46.708 Got JSON-RPC error response 00:24:46.708 response: 00:24:46.708 { 00:24:46.708 "code": -1, 00:24:46.708 "message": "Operation not permitted" 00:24:46.708 } 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 991730 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 991730 ']' 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 991730 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.708 11:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991730 00:24:46.708 11:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:46.708 11:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:46.708 11:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 991730' 00:24:46.708 killing process with pid 991730 00:24:46.708 11:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 991730 00:24:46.708 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.708 00:24:46.708 Latency(us) 00:24:46.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.708 =================================================================================================================== 00:24:46.708 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:46.709 11:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 991730 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.085 11:30:34 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 989407 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 989407 ']' 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 989407 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 989407 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 989407' 00:24:48.085 killing process with pid 989407 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 989407 00:24:48.085 [2024-07-12 11:30:34.119149] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:48.085 11:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 989407 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=992224 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 992224 00:24:49.463 11:30:35 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 992224 ']' 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.463 11:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.463 [2024-07-12 11:30:35.590771] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:49.463 [2024-07-12 11:30:35.590857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.463 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.463 [2024-07-12 11:30:35.701147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.722 [2024-07-12 11:30:35.909236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.722 [2024-07-12 11:30:35.909281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:49.722 [2024-07-12 11:30:35.909293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.722 [2024-07-12 11:30:35.909304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.722 [2024-07-12 11:30:35.909313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.722 [2024-07-12 11:30:35.909339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.YAhBXty2SN 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:50.291 [2024-07-12 11:30:36.562887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.291 11:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:50.550 11:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:50.809 [2024-07-12 11:30:36.911813] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.809 [2024-07-12 11:30:36.912048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.809 11:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:50.809 malloc0 00:24:50.809 11:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:51.067 11:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:51.326 [2024-07-12 11:30:37.477249] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:51.326 [2024-07-12 11:30:37.477287] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:51.326 [2024-07-12 11:30:37.477327] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:51.326 
request: 00:24:51.326 { 00:24:51.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.326 "host": "nqn.2016-06.io.spdk:host1", 00:24:51.326 "psk": "/tmp/tmp.YAhBXty2SN", 00:24:51.326 "method": "nvmf_subsystem_add_host", 00:24:51.327 "req_id": 1 00:24:51.327 } 00:24:51.327 Got JSON-RPC error response 00:24:51.327 response: 00:24:51.327 { 00:24:51.327 "code": -32603, 00:24:51.327 "message": "Internal error" 00:24:51.327 } 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 992224 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 992224 ']' 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 992224 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 992224 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 992224' 00:24:51.327 killing process with pid 992224 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 992224 00:24:51.327 11:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 992224 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.YAhBXty2SN 
00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=992926 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 992926 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 992926 ']' 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.703 11:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.703 [2024-07-12 11:30:39.000293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:52.703 [2024-07-12 11:30:39.000393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.703 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.962 [2024-07-12 11:30:39.106886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.962 [2024-07-12 11:30:39.316216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.962 [2024-07-12 11:30:39.316263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.962 [2024-07-12 11:30:39.316277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.962 [2024-07-12 11:30:39.316289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.962 [2024-07-12 11:30:39.316299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:52.962 [2024-07-12 11:30:39.316327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.528 11:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.529 11:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:24:53.529 11:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YAhBXty2SN 00:24:53.529 11:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:53.787 [2024-07-12 11:30:39.953642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.787 11:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:54.046 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:54.046 [2024-07-12 11:30:40.302574] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.046 [2024-07-12 11:30:40.302841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.046 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:24:54.305 malloc0 00:24:54.305 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:54.564 [2024-07-12 11:30:40.855839] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=993185 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 993185 /var/tmp/bdevperf.sock 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 993185 ']' 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.564 11:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.822 [2024-07-12 11:30:40.930780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:54.823 [2024-07-12 11:30:40.930869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993185 ] 00:24:54.823 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.823 [2024-07-12 11:30:41.029696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.081 [2024-07-12 11:30:41.252939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.649 11:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.649 11:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:55.649 11:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:24:55.649 [2024-07-12 11:30:41.870811] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.649 [2024-07-12 11:30:41.870913] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:55.649 TLSTESTn1 00:24:55.650 11:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:55.908 11:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:55.908 "subsystems": [ 00:24:55.908 { 00:24:55.908 
"subsystem": "keyring", 00:24:55.908 "config": [] 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "subsystem": "iobuf", 00:24:55.908 "config": [ 00:24:55.908 { 00:24:55.908 "method": "iobuf_set_options", 00:24:55.908 "params": { 00:24:55.908 "small_pool_count": 8192, 00:24:55.908 "large_pool_count": 1024, 00:24:55.908 "small_bufsize": 8192, 00:24:55.908 "large_bufsize": 135168 00:24:55.908 } 00:24:55.908 } 00:24:55.908 ] 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "subsystem": "sock", 00:24:55.908 "config": [ 00:24:55.908 { 00:24:55.908 "method": "sock_set_default_impl", 00:24:55.908 "params": { 00:24:55.908 "impl_name": "posix" 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "sock_impl_set_options", 00:24:55.908 "params": { 00:24:55.908 "impl_name": "ssl", 00:24:55.908 "recv_buf_size": 4096, 00:24:55.908 "send_buf_size": 4096, 00:24:55.908 "enable_recv_pipe": true, 00:24:55.908 "enable_quickack": false, 00:24:55.908 "enable_placement_id": 0, 00:24:55.908 "enable_zerocopy_send_server": true, 00:24:55.908 "enable_zerocopy_send_client": false, 00:24:55.908 "zerocopy_threshold": 0, 00:24:55.908 "tls_version": 0, 00:24:55.908 "enable_ktls": false 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "sock_impl_set_options", 00:24:55.908 "params": { 00:24:55.908 "impl_name": "posix", 00:24:55.908 "recv_buf_size": 2097152, 00:24:55.908 "send_buf_size": 2097152, 00:24:55.908 "enable_recv_pipe": true, 00:24:55.908 "enable_quickack": false, 00:24:55.908 "enable_placement_id": 0, 00:24:55.908 "enable_zerocopy_send_server": true, 00:24:55.908 "enable_zerocopy_send_client": false, 00:24:55.908 "zerocopy_threshold": 0, 00:24:55.908 "tls_version": 0, 00:24:55.908 "enable_ktls": false 00:24:55.908 } 00:24:55.908 } 00:24:55.908 ] 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "subsystem": "vmd", 00:24:55.908 "config": [] 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "subsystem": "accel", 00:24:55.908 "config": [ 00:24:55.908 { 00:24:55.908 "method": 
"accel_set_options", 00:24:55.908 "params": { 00:24:55.908 "small_cache_size": 128, 00:24:55.908 "large_cache_size": 16, 00:24:55.908 "task_count": 2048, 00:24:55.908 "sequence_count": 2048, 00:24:55.908 "buf_count": 2048 00:24:55.908 } 00:24:55.908 } 00:24:55.908 ] 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "subsystem": "bdev", 00:24:55.908 "config": [ 00:24:55.908 { 00:24:55.908 "method": "bdev_set_options", 00:24:55.908 "params": { 00:24:55.908 "bdev_io_pool_size": 65535, 00:24:55.908 "bdev_io_cache_size": 256, 00:24:55.908 "bdev_auto_examine": true, 00:24:55.908 "iobuf_small_cache_size": 128, 00:24:55.908 "iobuf_large_cache_size": 16 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "bdev_raid_set_options", 00:24:55.908 "params": { 00:24:55.908 "process_window_size_kb": 1024 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "bdev_iscsi_set_options", 00:24:55.908 "params": { 00:24:55.908 "timeout_sec": 30 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "bdev_nvme_set_options", 00:24:55.908 "params": { 00:24:55.908 "action_on_timeout": "none", 00:24:55.908 "timeout_us": 0, 00:24:55.908 "timeout_admin_us": 0, 00:24:55.908 "keep_alive_timeout_ms": 10000, 00:24:55.908 "arbitration_burst": 0, 00:24:55.908 "low_priority_weight": 0, 00:24:55.908 "medium_priority_weight": 0, 00:24:55.908 "high_priority_weight": 0, 00:24:55.908 "nvme_adminq_poll_period_us": 10000, 00:24:55.908 "nvme_ioq_poll_period_us": 0, 00:24:55.908 "io_queue_requests": 0, 00:24:55.908 "delay_cmd_submit": true, 00:24:55.908 "transport_retry_count": 4, 00:24:55.908 "bdev_retry_count": 3, 00:24:55.908 "transport_ack_timeout": 0, 00:24:55.908 "ctrlr_loss_timeout_sec": 0, 00:24:55.908 "reconnect_delay_sec": 0, 00:24:55.908 "fast_io_fail_timeout_sec": 0, 00:24:55.908 "disable_auto_failback": false, 00:24:55.908 "generate_uuids": false, 00:24:55.908 "transport_tos": 0, 00:24:55.908 "nvme_error_stat": false, 00:24:55.908 "rdma_srq_size": 0, 
00:24:55.908 "io_path_stat": false, 00:24:55.908 "allow_accel_sequence": false, 00:24:55.908 "rdma_max_cq_size": 0, 00:24:55.908 "rdma_cm_event_timeout_ms": 0, 00:24:55.908 "dhchap_digests": [ 00:24:55.908 "sha256", 00:24:55.908 "sha384", 00:24:55.908 "sha512" 00:24:55.908 ], 00:24:55.908 "dhchap_dhgroups": [ 00:24:55.908 "null", 00:24:55.908 "ffdhe2048", 00:24:55.908 "ffdhe3072", 00:24:55.908 "ffdhe4096", 00:24:55.908 "ffdhe6144", 00:24:55.908 "ffdhe8192" 00:24:55.908 ] 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "bdev_nvme_set_hotplug", 00:24:55.908 "params": { 00:24:55.908 "period_us": 100000, 00:24:55.908 "enable": false 00:24:55.908 } 00:24:55.908 }, 00:24:55.908 { 00:24:55.908 "method": "bdev_malloc_create", 00:24:55.908 "params": { 00:24:55.908 "name": "malloc0", 00:24:55.908 "num_blocks": 8192, 00:24:55.908 "block_size": 4096, 00:24:55.908 "physical_block_size": 4096, 00:24:55.908 "uuid": "9ed07552-2639-4428-b8d9-5ac084d6ac2b", 00:24:55.908 "optimal_io_boundary": 0 00:24:55.908 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "bdev_wait_for_examine" 00:24:55.909 } 00:24:55.909 ] 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "subsystem": "nbd", 00:24:55.909 "config": [] 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "subsystem": "scheduler", 00:24:55.909 "config": [ 00:24:55.909 { 00:24:55.909 "method": "framework_set_scheduler", 00:24:55.909 "params": { 00:24:55.909 "name": "static" 00:24:55.909 } 00:24:55.909 } 00:24:55.909 ] 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "subsystem": "nvmf", 00:24:55.909 "config": [ 00:24:55.909 { 00:24:55.909 "method": "nvmf_set_config", 00:24:55.909 "params": { 00:24:55.909 "discovery_filter": "match_any", 00:24:55.909 "admin_cmd_passthru": { 00:24:55.909 "identify_ctrlr": false 00:24:55.909 } 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_set_max_subsystems", 00:24:55.909 "params": { 00:24:55.909 "max_subsystems": 1024 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 
00:24:55.909 "method": "nvmf_set_crdt", 00:24:55.909 "params": { 00:24:55.909 "crdt1": 0, 00:24:55.909 "crdt2": 0, 00:24:55.909 "crdt3": 0 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_create_transport", 00:24:55.909 "params": { 00:24:55.909 "trtype": "TCP", 00:24:55.909 "max_queue_depth": 128, 00:24:55.909 "max_io_qpairs_per_ctrlr": 127, 00:24:55.909 "in_capsule_data_size": 4096, 00:24:55.909 "max_io_size": 131072, 00:24:55.909 "io_unit_size": 131072, 00:24:55.909 "max_aq_depth": 128, 00:24:55.909 "num_shared_buffers": 511, 00:24:55.909 "buf_cache_size": 4294967295, 00:24:55.909 "dif_insert_or_strip": false, 00:24:55.909 "zcopy": false, 00:24:55.909 "c2h_success": false, 00:24:55.909 "sock_priority": 0, 00:24:55.909 "abort_timeout_sec": 1, 00:24:55.909 "ack_timeout": 0, 00:24:55.909 "data_wr_pool_size": 0 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_create_subsystem", 00:24:55.909 "params": { 00:24:55.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.909 "allow_any_host": false, 00:24:55.909 "serial_number": "SPDK00000000000001", 00:24:55.909 "model_number": "SPDK bdev Controller", 00:24:55.909 "max_namespaces": 10, 00:24:55.909 "min_cntlid": 1, 00:24:55.909 "max_cntlid": 65519, 00:24:55.909 "ana_reporting": false 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_subsystem_add_host", 00:24:55.909 "params": { 00:24:55.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.909 "host": "nqn.2016-06.io.spdk:host1", 00:24:55.909 "psk": "/tmp/tmp.YAhBXty2SN" 00:24:55.909 } 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_subsystem_add_ns", 00:24:55.909 "params": { 00:24:55.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.909 "namespace": { 00:24:55.909 "nsid": 1, 00:24:55.909 "bdev_name": "malloc0", 00:24:55.909 "nguid": "9ED0755226394428B8D95AC084D6AC2B", 00:24:55.909 "uuid": "9ed07552-2639-4428-b8d9-5ac084d6ac2b", 00:24:55.909 "no_auto_visible": false 00:24:55.909 } 00:24:55.909 
} 00:24:55.909 }, 00:24:55.909 { 00:24:55.909 "method": "nvmf_subsystem_add_listener", 00:24:55.909 "params": { 00:24:55.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.909 "listen_address": { 00:24:55.909 "trtype": "TCP", 00:24:55.909 "adrfam": "IPv4", 00:24:55.909 "traddr": "10.0.0.2", 00:24:55.909 "trsvcid": "4420" 00:24:55.909 }, 00:24:55.909 "secure_channel": true 00:24:55.909 } 00:24:55.909 } 00:24:55.909 ] 00:24:55.909 } 00:24:55.909 ] 00:24:55.909 }' 00:24:55.909 11:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:56.167 11:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:56.167 "subsystems": [ 00:24:56.167 { 00:24:56.167 "subsystem": "keyring", 00:24:56.167 "config": [] 00:24:56.167 }, 00:24:56.167 { 00:24:56.167 "subsystem": "iobuf", 00:24:56.167 "config": [ 00:24:56.167 { 00:24:56.167 "method": "iobuf_set_options", 00:24:56.167 "params": { 00:24:56.167 "small_pool_count": 8192, 00:24:56.167 "large_pool_count": 1024, 00:24:56.167 "small_bufsize": 8192, 00:24:56.167 "large_bufsize": 135168 00:24:56.167 } 00:24:56.167 } 00:24:56.167 ] 00:24:56.167 }, 00:24:56.167 { 00:24:56.167 "subsystem": "sock", 00:24:56.167 "config": [ 00:24:56.167 { 00:24:56.167 "method": "sock_set_default_impl", 00:24:56.167 "params": { 00:24:56.167 "impl_name": "posix" 00:24:56.167 } 00:24:56.167 }, 00:24:56.167 { 00:24:56.167 "method": "sock_impl_set_options", 00:24:56.167 "params": { 00:24:56.167 "impl_name": "ssl", 00:24:56.167 "recv_buf_size": 4096, 00:24:56.167 "send_buf_size": 4096, 00:24:56.167 "enable_recv_pipe": true, 00:24:56.167 "enable_quickack": false, 00:24:56.167 "enable_placement_id": 0, 00:24:56.167 "enable_zerocopy_send_server": true, 00:24:56.168 "enable_zerocopy_send_client": false, 00:24:56.168 "zerocopy_threshold": 0, 00:24:56.168 "tls_version": 0, 00:24:56.168 "enable_ktls": false 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 
00:24:56.168 "method": "sock_impl_set_options", 00:24:56.168 "params": { 00:24:56.168 "impl_name": "posix", 00:24:56.168 "recv_buf_size": 2097152, 00:24:56.168 "send_buf_size": 2097152, 00:24:56.168 "enable_recv_pipe": true, 00:24:56.168 "enable_quickack": false, 00:24:56.168 "enable_placement_id": 0, 00:24:56.168 "enable_zerocopy_send_server": true, 00:24:56.168 "enable_zerocopy_send_client": false, 00:24:56.168 "zerocopy_threshold": 0, 00:24:56.168 "tls_version": 0, 00:24:56.168 "enable_ktls": false 00:24:56.168 } 00:24:56.168 } 00:24:56.168 ] 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "subsystem": "vmd", 00:24:56.168 "config": [] 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "subsystem": "accel", 00:24:56.168 "config": [ 00:24:56.168 { 00:24:56.168 "method": "accel_set_options", 00:24:56.168 "params": { 00:24:56.168 "small_cache_size": 128, 00:24:56.168 "large_cache_size": 16, 00:24:56.168 "task_count": 2048, 00:24:56.168 "sequence_count": 2048, 00:24:56.168 "buf_count": 2048 00:24:56.168 } 00:24:56.168 } 00:24:56.168 ] 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "subsystem": "bdev", 00:24:56.168 "config": [ 00:24:56.168 { 00:24:56.168 "method": "bdev_set_options", 00:24:56.168 "params": { 00:24:56.168 "bdev_io_pool_size": 65535, 00:24:56.168 "bdev_io_cache_size": 256, 00:24:56.168 "bdev_auto_examine": true, 00:24:56.168 "iobuf_small_cache_size": 128, 00:24:56.168 "iobuf_large_cache_size": 16 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": "bdev_raid_set_options", 00:24:56.168 "params": { 00:24:56.168 "process_window_size_kb": 1024 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": "bdev_iscsi_set_options", 00:24:56.168 "params": { 00:24:56.168 "timeout_sec": 30 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": "bdev_nvme_set_options", 00:24:56.168 "params": { 00:24:56.168 "action_on_timeout": "none", 00:24:56.168 "timeout_us": 0, 00:24:56.168 "timeout_admin_us": 0, 00:24:56.168 "keep_alive_timeout_ms": 
10000, 00:24:56.168 "arbitration_burst": 0, 00:24:56.168 "low_priority_weight": 0, 00:24:56.168 "medium_priority_weight": 0, 00:24:56.168 "high_priority_weight": 0, 00:24:56.168 "nvme_adminq_poll_period_us": 10000, 00:24:56.168 "nvme_ioq_poll_period_us": 0, 00:24:56.168 "io_queue_requests": 512, 00:24:56.168 "delay_cmd_submit": true, 00:24:56.168 "transport_retry_count": 4, 00:24:56.168 "bdev_retry_count": 3, 00:24:56.168 "transport_ack_timeout": 0, 00:24:56.168 "ctrlr_loss_timeout_sec": 0, 00:24:56.168 "reconnect_delay_sec": 0, 00:24:56.168 "fast_io_fail_timeout_sec": 0, 00:24:56.168 "disable_auto_failback": false, 00:24:56.168 "generate_uuids": false, 00:24:56.168 "transport_tos": 0, 00:24:56.168 "nvme_error_stat": false, 00:24:56.168 "rdma_srq_size": 0, 00:24:56.168 "io_path_stat": false, 00:24:56.168 "allow_accel_sequence": false, 00:24:56.168 "rdma_max_cq_size": 0, 00:24:56.168 "rdma_cm_event_timeout_ms": 0, 00:24:56.168 "dhchap_digests": [ 00:24:56.168 "sha256", 00:24:56.168 "sha384", 00:24:56.168 "sha512" 00:24:56.168 ], 00:24:56.168 "dhchap_dhgroups": [ 00:24:56.168 "null", 00:24:56.168 "ffdhe2048", 00:24:56.168 "ffdhe3072", 00:24:56.168 "ffdhe4096", 00:24:56.168 "ffdhe6144", 00:24:56.168 "ffdhe8192" 00:24:56.168 ] 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": "bdev_nvme_attach_controller", 00:24:56.168 "params": { 00:24:56.168 "name": "TLSTEST", 00:24:56.168 "trtype": "TCP", 00:24:56.168 "adrfam": "IPv4", 00:24:56.168 "traddr": "10.0.0.2", 00:24:56.168 "trsvcid": "4420", 00:24:56.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.168 "prchk_reftag": false, 00:24:56.168 "prchk_guard": false, 00:24:56.168 "ctrlr_loss_timeout_sec": 0, 00:24:56.168 "reconnect_delay_sec": 0, 00:24:56.168 "fast_io_fail_timeout_sec": 0, 00:24:56.168 "psk": "/tmp/tmp.YAhBXty2SN", 00:24:56.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.168 "hdgst": false, 00:24:56.168 "ddgst": false 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": 
"bdev_nvme_set_hotplug", 00:24:56.168 "params": { 00:24:56.168 "period_us": 100000, 00:24:56.168 "enable": false 00:24:56.168 } 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "method": "bdev_wait_for_examine" 00:24:56.168 } 00:24:56.168 ] 00:24:56.168 }, 00:24:56.168 { 00:24:56.168 "subsystem": "nbd", 00:24:56.168 "config": [] 00:24:56.168 } 00:24:56.168 ] 00:24:56.168 }' 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 993185 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 993185 ']' 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 993185 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 993185 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 993185' 00:24:56.168 killing process with pid 993185 00:24:56.168 11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 993185 00:24:56.168 Received shutdown signal, test time was about 10.000000 seconds 00:24:56.168 00:24:56.168 Latency(us) 00:24:56.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.168 =================================================================================================================== 00:24:56.168 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:56.168 [2024-07-12 11:30:42.495441] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:56.168 
11:30:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 993185 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 992926 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 992926 ']' 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 992926 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 992926 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 992926' 00:24:57.561 killing process with pid 992926 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 992926 00:24:57.561 [2024-07-12 11:30:43.601745] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:57.561 11:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 992926 00:24:58.939 11:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:58.939 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.939 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.939 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.939 11:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:58.939 "subsystems": [ 00:24:58.939 { 00:24:58.939 "subsystem": "keyring", 00:24:58.939 "config": [] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "iobuf", 00:24:58.939 
"config": [ 00:24:58.939 { 00:24:58.939 "method": "iobuf_set_options", 00:24:58.939 "params": { 00:24:58.939 "small_pool_count": 8192, 00:24:58.939 "large_pool_count": 1024, 00:24:58.939 "small_bufsize": 8192, 00:24:58.939 "large_bufsize": 135168 00:24:58.939 } 00:24:58.939 } 00:24:58.939 ] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "sock", 00:24:58.939 "config": [ 00:24:58.939 { 00:24:58.939 "method": "sock_set_default_impl", 00:24:58.939 "params": { 00:24:58.939 "impl_name": "posix" 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "sock_impl_set_options", 00:24:58.939 "params": { 00:24:58.939 "impl_name": "ssl", 00:24:58.939 "recv_buf_size": 4096, 00:24:58.939 "send_buf_size": 4096, 00:24:58.939 "enable_recv_pipe": true, 00:24:58.939 "enable_quickack": false, 00:24:58.939 "enable_placement_id": 0, 00:24:58.939 "enable_zerocopy_send_server": true, 00:24:58.939 "enable_zerocopy_send_client": false, 00:24:58.939 "zerocopy_threshold": 0, 00:24:58.939 "tls_version": 0, 00:24:58.939 "enable_ktls": false 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "sock_impl_set_options", 00:24:58.939 "params": { 00:24:58.939 "impl_name": "posix", 00:24:58.939 "recv_buf_size": 2097152, 00:24:58.939 "send_buf_size": 2097152, 00:24:58.939 "enable_recv_pipe": true, 00:24:58.939 "enable_quickack": false, 00:24:58.939 "enable_placement_id": 0, 00:24:58.939 "enable_zerocopy_send_server": true, 00:24:58.939 "enable_zerocopy_send_client": false, 00:24:58.939 "zerocopy_threshold": 0, 00:24:58.939 "tls_version": 0, 00:24:58.939 "enable_ktls": false 00:24:58.939 } 00:24:58.939 } 00:24:58.939 ] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "vmd", 00:24:58.939 "config": [] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "accel", 00:24:58.939 "config": [ 00:24:58.939 { 00:24:58.939 "method": "accel_set_options", 00:24:58.939 "params": { 00:24:58.939 "small_cache_size": 128, 00:24:58.939 "large_cache_size": 16, 00:24:58.939 
"task_count": 2048, 00:24:58.939 "sequence_count": 2048, 00:24:58.939 "buf_count": 2048 00:24:58.939 } 00:24:58.939 } 00:24:58.939 ] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "bdev", 00:24:58.939 "config": [ 00:24:58.939 { 00:24:58.939 "method": "bdev_set_options", 00:24:58.939 "params": { 00:24:58.939 "bdev_io_pool_size": 65535, 00:24:58.939 "bdev_io_cache_size": 256, 00:24:58.939 "bdev_auto_examine": true, 00:24:58.939 "iobuf_small_cache_size": 128, 00:24:58.939 "iobuf_large_cache_size": 16 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_raid_set_options", 00:24:58.939 "params": { 00:24:58.939 "process_window_size_kb": 1024 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_iscsi_set_options", 00:24:58.939 "params": { 00:24:58.939 "timeout_sec": 30 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_nvme_set_options", 00:24:58.939 "params": { 00:24:58.939 "action_on_timeout": "none", 00:24:58.939 "timeout_us": 0, 00:24:58.939 "timeout_admin_us": 0, 00:24:58.939 "keep_alive_timeout_ms": 10000, 00:24:58.939 "arbitration_burst": 0, 00:24:58.939 "low_priority_weight": 0, 00:24:58.939 "medium_priority_weight": 0, 00:24:58.939 "high_priority_weight": 0, 00:24:58.939 "nvme_adminq_poll_period_us": 10000, 00:24:58.939 "nvme_ioq_poll_period_us": 0, 00:24:58.939 "io_queue_requests": 0, 00:24:58.939 "delay_cmd_submit": true, 00:24:58.939 "transport_retry_count": 4, 00:24:58.939 "bdev_retry_count": 3, 00:24:58.939 "transport_ack_timeout": 0, 00:24:58.939 "ctrlr_loss_timeout_sec": 0, 00:24:58.939 "reconnect_delay_sec": 0, 00:24:58.939 "fast_io_fail_timeout_sec": 0, 00:24:58.939 "disable_auto_failback": false, 00:24:58.939 "generate_uuids": false, 00:24:58.939 "transport_tos": 0, 00:24:58.939 "nvme_error_stat": false, 00:24:58.939 "rdma_srq_size": 0, 00:24:58.939 "io_path_stat": false, 00:24:58.939 "allow_accel_sequence": false, 00:24:58.939 "rdma_max_cq_size": 0, 00:24:58.939 
"rdma_cm_event_timeout_ms": 0, 00:24:58.939 "dhchap_digests": [ 00:24:58.939 "sha256", 00:24:58.939 "sha384", 00:24:58.939 "sha512" 00:24:58.939 ], 00:24:58.939 "dhchap_dhgroups": [ 00:24:58.939 "null", 00:24:58.939 "ffdhe2048", 00:24:58.939 "ffdhe3072", 00:24:58.939 "ffdhe4096", 00:24:58.939 "ffdhe6144", 00:24:58.939 "ffdhe8192" 00:24:58.939 ] 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_nvme_set_hotplug", 00:24:58.939 "params": { 00:24:58.939 "period_us": 100000, 00:24:58.939 "enable": false 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_malloc_create", 00:24:58.939 "params": { 00:24:58.939 "name": "malloc0", 00:24:58.939 "num_blocks": 8192, 00:24:58.939 "block_size": 4096, 00:24:58.939 "physical_block_size": 4096, 00:24:58.939 "uuid": "9ed07552-2639-4428-b8d9-5ac084d6ac2b", 00:24:58.939 "optimal_io_boundary": 0 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "bdev_wait_for_examine" 00:24:58.939 } 00:24:58.939 ] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "nbd", 00:24:58.939 "config": [] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "scheduler", 00:24:58.939 "config": [ 00:24:58.939 { 00:24:58.939 "method": "framework_set_scheduler", 00:24:58.939 "params": { 00:24:58.939 "name": "static" 00:24:58.939 } 00:24:58.939 } 00:24:58.939 ] 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "subsystem": "nvmf", 00:24:58.939 "config": [ 00:24:58.939 { 00:24:58.939 "method": "nvmf_set_config", 00:24:58.939 "params": { 00:24:58.939 "discovery_filter": "match_any", 00:24:58.939 "admin_cmd_passthru": { 00:24:58.939 "identify_ctrlr": false 00:24:58.939 } 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "nvmf_set_max_subsystems", 00:24:58.939 "params": { 00:24:58.939 "max_subsystems": 1024 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "nvmf_set_crdt", 00:24:58.939 "params": { 00:24:58.939 "crdt1": 0, 00:24:58.939 "crdt2": 0, 00:24:58.939 
"crdt3": 0 00:24:58.939 } 00:24:58.939 }, 00:24:58.939 { 00:24:58.939 "method": "nvmf_create_transport", 00:24:58.939 "params": { 00:24:58.939 "trtype": "TCP", 00:24:58.939 "max_queue_depth": 128, 00:24:58.939 "max_io_qpairs_per_ctrlr": 127, 00:24:58.939 "in_capsule_data_size": 4096, 00:24:58.939 "max_io_size": 131072, 00:24:58.939 "io_unit_size": 131072, 00:24:58.939 "max_aq_depth": 128, 00:24:58.939 "num_shared_buffers": 511, 00:24:58.939 "buf_cache_size": 4294967295, 00:24:58.940 "dif_insert_or_strip": false, 00:24:58.940 "zcopy": false, 00:24:58.940 "c2h_success": false, 00:24:58.940 "sock_priority": 0, 00:24:58.940 "abort_timeout_sec": 1, 00:24:58.940 "ack_timeout": 0, 00:24:58.940 "data_wr_pool_size": 0 00:24:58.940 } 00:24:58.940 }, 00:24:58.940 { 00:24:58.940 "method": "nvmf_create_subsystem", 00:24:58.940 "params": { 00:24:58.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.940 "allow_any_host": false, 00:24:58.940 "serial_number": "SPDK00000000000001", 00:24:58.940 "model_number": "SPDK bdev Controller", 00:24:58.940 "max_namespaces": 10, 00:24:58.940 "min_cntlid": 1, 00:24:58.940 "max_cntlid": 65519, 00:24:58.940 "ana_reporting": false 00:24:58.940 } 00:24:58.940 }, 00:24:58.940 { 00:24:58.940 "method": "nvmf_subsystem_add_host", 00:24:58.940 "params": { 00:24:58.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.940 "host": "nqn.2016-06.io.spdk:host1", 00:24:58.940 "psk": "/tmp/tmp.YAhBXty2SN" 00:24:58.940 } 00:24:58.940 }, 00:24:58.940 { 00:24:58.940 "method": "nvmf_subsystem_add_ns", 00:24:58.940 "params": { 00:24:58.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.940 "namespace": { 00:24:58.940 "nsid": 1, 00:24:58.940 "bdev_name": "malloc0", 00:24:58.940 "nguid": "9ED0755226394428B8D95AC084D6AC2B", 00:24:58.940 "uuid": "9ed07552-2639-4428-b8d9-5ac084d6ac2b", 00:24:58.940 "no_auto_visible": false 00:24:58.940 } 00:24:58.940 } 00:24:58.940 }, 00:24:58.940 { 00:24:58.940 "method": "nvmf_subsystem_add_listener", 00:24:58.940 "params": { 00:24:58.940 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.940 "listen_address": { 00:24:58.940 "trtype": "TCP", 00:24:58.940 "adrfam": "IPv4", 00:24:58.940 "traddr": "10.0.0.2", 00:24:58.940 "trsvcid": "4420" 00:24:58.940 }, 00:24:58.940 "secure_channel": true 00:24:58.940 } 00:24:58.940 } 00:24:58.940 ] 00:24:58.940 } 00:24:58.940 ] 00:24:58.940 }' 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=993892 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 993892 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 993892 ']' 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.940 11:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.940 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.940 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.940 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.940 [2024-07-12 11:30:45.077185] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:58.940 [2024-07-12 11:30:45.077290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.940 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.940 [2024-07-12 11:30:45.187548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.198 [2024-07-12 11:30:45.400470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.198 [2024-07-12 11:30:45.400517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.198 [2024-07-12 11:30:45.400529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.198 [2024-07-12 11:30:45.400541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.198 [2024-07-12 11:30:45.400561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:59.198 [2024-07-12 11:30:45.400654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.765 [2024-07-12 11:30:45.940368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.765 [2024-07-12 11:30:45.956340] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:59.765 [2024-07-12 11:30:45.972392] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.765 [2024-07-12 11:30:45.972599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.765 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.765 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:59.765 11:30:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.765 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.765 11:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=994134 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 994134 /var/tmp/bdevperf.sock 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 994134 ']' 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.765 11:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:59.765 "subsystems": [ 00:24:59.765 { 00:24:59.765 "subsystem": "keyring", 00:24:59.765 "config": [] 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "subsystem": "iobuf", 00:24:59.765 "config": [ 00:24:59.765 { 00:24:59.765 "method": "iobuf_set_options", 00:24:59.765 "params": { 00:24:59.765 "small_pool_count": 8192, 00:24:59.765 "large_pool_count": 1024, 00:24:59.765 "small_bufsize": 8192, 00:24:59.765 "large_bufsize": 135168 00:24:59.765 } 00:24:59.765 } 00:24:59.765 ] 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "subsystem": "sock", 00:24:59.765 "config": [ 00:24:59.765 { 00:24:59.765 "method": "sock_set_default_impl", 00:24:59.765 "params": { 00:24:59.765 "impl_name": "posix" 00:24:59.765 } 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "method": "sock_impl_set_options", 00:24:59.765 "params": { 00:24:59.765 "impl_name": "ssl", 00:24:59.765 "recv_buf_size": 4096, 00:24:59.765 "send_buf_size": 4096, 00:24:59.765 "enable_recv_pipe": true, 00:24:59.765 "enable_quickack": false, 00:24:59.765 "enable_placement_id": 0, 00:24:59.765 "enable_zerocopy_send_server": true, 00:24:59.765 "enable_zerocopy_send_client": false, 00:24:59.765 "zerocopy_threshold": 0, 00:24:59.765 "tls_version": 0, 00:24:59.765 "enable_ktls": false 00:24:59.765 } 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "method": "sock_impl_set_options", 00:24:59.765 "params": { 00:24:59.765 "impl_name": "posix", 00:24:59.765 "recv_buf_size": 2097152, 00:24:59.765 "send_buf_size": 2097152, 00:24:59.765 "enable_recv_pipe": true, 00:24:59.765 "enable_quickack": false, 00:24:59.765 "enable_placement_id": 0, 00:24:59.765 "enable_zerocopy_send_server": true, 00:24:59.765 "enable_zerocopy_send_client": false, 
00:24:59.765 "zerocopy_threshold": 0, 00:24:59.765 "tls_version": 0, 00:24:59.765 "enable_ktls": false 00:24:59.765 } 00:24:59.765 } 00:24:59.765 ] 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "subsystem": "vmd", 00:24:59.765 "config": [] 00:24:59.765 }, 00:24:59.765 { 00:24:59.765 "subsystem": "accel", 00:24:59.765 "config": [ 00:24:59.765 { 00:24:59.765 "method": "accel_set_options", 00:24:59.765 "params": { 00:24:59.765 "small_cache_size": 128, 00:24:59.765 "large_cache_size": 16, 00:24:59.765 "task_count": 2048, 00:24:59.765 "sequence_count": 2048, 00:24:59.765 "buf_count": 2048 00:24:59.766 } 00:24:59.766 } 00:24:59.766 ] 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "subsystem": "bdev", 00:24:59.766 "config": [ 00:24:59.766 { 00:24:59.766 "method": "bdev_set_options", 00:24:59.766 "params": { 00:24:59.766 "bdev_io_pool_size": 65535, 00:24:59.766 "bdev_io_cache_size": 256, 00:24:59.766 "bdev_auto_examine": true, 00:24:59.766 "iobuf_small_cache_size": 128, 00:24:59.766 "iobuf_large_cache_size": 16 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_raid_set_options", 00:24:59.766 "params": { 00:24:59.766 "process_window_size_kb": 1024 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_iscsi_set_options", 00:24:59.766 "params": { 00:24:59.766 "timeout_sec": 30 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_nvme_set_options", 00:24:59.766 "params": { 00:24:59.766 "action_on_timeout": "none", 00:24:59.766 "timeout_us": 0, 00:24:59.766 "timeout_admin_us": 0, 00:24:59.766 "keep_alive_timeout_ms": 10000, 00:24:59.766 "arbitration_burst": 0, 00:24:59.766 "low_priority_weight": 0, 00:24:59.766 "medium_priority_weight": 0, 00:24:59.766 "high_priority_weight": 0, 00:24:59.766 "nvme_adminq_poll_period_us": 10000, 00:24:59.766 "nvme_ioq_poll_period_us": 0, 00:24:59.766 "io_queue_requests": 512, 00:24:59.766 "delay_cmd_submit": true, 00:24:59.766 "transport_retry_count": 4, 00:24:59.766 
"bdev_retry_count": 3, 00:24:59.766 "transport_ack_timeout": 0, 00:24:59.766 "ctrlr_loss_timeout_sec": 0, 00:24:59.766 "reconnect_delay_sec": 0, 00:24:59.766 "fast_io_fail_timeout_sec": 0, 00:24:59.766 "disable_auto_failback": false, 00:24:59.766 "generate_uuids": false, 00:24:59.766 "transport_tos": 0, 00:24:59.766 "nvme_error_stat": false, 00:24:59.766 "rdma_srq_size": 0, 00:24:59.766 "io_path_stat": false, 00:24:59.766 "allow_accel_sequence": false, 00:24:59.766 "rdma_max_cq_size": 0, 00:24:59.766 "rdma_cm_event_timeout_ms": 0, 00:24:59.766 "dhchap_digests": [ 00:24:59.766 "sha256", 00:24:59.766 "sha384", 00:24:59.766 "sha512" 00:24:59.766 ], 00:24:59.766 "dhchap_dhgroups": [ 00:24:59.766 "null", 00:24:59.766 "ffdhe2048", 00:24:59.766 "ffdhe3072", 00:24:59.766 "ffdhe4096", 00:24:59.766 "ffdhe6144", 00:24:59.766 "ffdhe8192" 00:24:59.766 ] 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_nvme_attach_controller", 00:24:59.766 "params": { 00:24:59.766 "name": "TLSTEST", 00:24:59.766 "trtype": "TCP", 00:24:59.766 "adrfam": "IPv4", 00:24:59.766 "traddr": "10.0.0.2", 00:24:59.766 "trsvcid": "4420", 00:24:59.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.766 "prchk_reftag": false, 00:24:59.766 "prchk_guard": false, 00:24:59.766 "ctrlr_loss_timeout_sec": 0, 00:24:59.766 "reconnect_delay_sec": 0, 00:24:59.766 "fast_io_fail_timeout_sec": 0, 00:24:59.766 "psk": "/tmp/tmp.YAhBXty2SN", 00:24:59.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.766 "hdgst": false, 00:24:59.766 "ddgst": false 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_nvme_set_hotplug", 00:24:59.766 "params": { 00:24:59.766 "period_us": 100000, 00:24:59.766 "enable": false 00:24:59.766 } 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "method": "bdev_wait_for_examine" 00:24:59.766 } 00:24:59.766 ] 00:24:59.766 }, 00:24:59.766 { 00:24:59.766 "subsystem": "nbd", 00:24:59.766 "config": [] 00:24:59.766 } 00:24:59.766 ] 00:24:59.766 }' 00:24:59.766 
11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.766 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.766 [2024-07-12 11:30:46.103581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:59.766 [2024-07-12 11:30:46.103701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994134 ] 00:25:00.025 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.025 [2024-07-12 11:30:46.201856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.284 [2024-07-12 11:30:46.425222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.542 [2024-07-12 11:30:46.866681] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.542 [2024-07-12 11:30:46.866793] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:00.801 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.801 11:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:00.801 11:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:00.801 Running I/O for 10 seconds... 
00:25:10.844 00:25:10.844 Latency(us) 00:25:10.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.844 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:10.844 Verification LBA range: start 0x0 length 0x2000 00:25:10.844 TLSTESTn1 : 10.02 4367.84 17.06 0.00 0.00 29257.96 8491.19 36700.16 00:25:10.844 =================================================================================================================== 00:25:10.844 Total : 4367.84 17.06 0.00 0.00 29257.96 8491.19 36700.16 00:25:10.844 0 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 994134 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 994134 ']' 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 994134 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994134 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994134' 00:25:10.844 killing process with pid 994134 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 994134 00:25:10.844 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.844 00:25:10.844 Latency(us) 00:25:10.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.844 
=================================================================================================================== 00:25:10.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.844 [2024-07-12 11:30:57.174740] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:10.844 11:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 994134 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 993892 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 993892 ']' 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 993892 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 993892 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 993892' 00:25:12.220 killing process with pid 993892 00:25:12.220 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 993892 00:25:12.220 [2024-07-12 11:30:58.307786] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:12.221 11:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 993892 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 
00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=996240 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 996240 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 996240 ']' 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.598 11:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.598 [2024-07-12 11:30:59.779427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:13.598 [2024-07-12 11:30:59.779537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.598 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.598 [2024-07-12 11:30:59.892042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.857 [2024-07-12 11:31:00.112769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:13.857 [2024-07-12 11:31:00.112816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.857 [2024-07-12 11:31:00.112827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.857 [2024-07-12 11:31:00.112838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.857 [2024-07-12 11:31:00.112847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.857 [2024-07-12 11:31:00.112880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.YAhBXty2SN 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YAhBXty2SN 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:14.425 [2024-07-12 11:31:00.738279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.425 11:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:14.684 11:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:14.943 [2024-07-12 11:31:01.079171] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.943 [2024-07-12 11:31:01.079424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.943 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:15.202 malloc0 00:25:15.202 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:15.202 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YAhBXty2SN 00:25:15.461 [2024-07-12 11:31:01.657441] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=996691 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 996691 /var/tmp/bdevperf.sock 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 996691 ']' 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.461 11:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.461 [2024-07-12 11:31:01.746835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:15.461 [2024-07-12 11:31:01.746926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996691 ] 00:25:15.461 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.720 [2024-07-12 11:31:01.849939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.720 [2024-07-12 11:31:02.059424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.288 11:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.288 11:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:16.288 11:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YAhBXty2SN 00:25:16.547 11:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:16.547 [2024-07-12 11:31:02.857389] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.808 
nvme0n1 00:25:16.808 11:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.808 Running I/O for 1 seconds... 00:25:17.745 00:25:17.745 Latency(us) 00:25:17.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.745 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.745 Verification LBA range: start 0x0 length 0x2000 00:25:17.745 nvme0n1 : 1.01 4492.97 17.55 0.00 0.00 28265.16 5983.72 39891.48 00:25:17.745 =================================================================================================================== 00:25:17.745 Total : 4492.97 17.55 0.00 0.00 28265.16 5983.72 39891.48 00:25:17.745 0 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 996691 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 996691 ']' 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 996691 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:17.745 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 996691 00:25:18.004 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:18.004 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:18.004 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 996691' 00:25:18.004 killing process with pid 996691 00:25:18.004 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 996691 00:25:18.004 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.004 00:25:18.004 Latency(us) 00:25:18.004 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:25:18.004 =================================================================================================================== 00:25:18.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.004 11:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 996691 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 996240 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 996240 ']' 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 996240 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 996240 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 996240' 00:25:18.941 killing process with pid 996240 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 996240 00:25:18.941 [2024-07-12 11:31:05.221449] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:18.941 11:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 996240 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=997404 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 997404 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 997404 ']' 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.321 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.322 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.322 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.322 11:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.581 [2024-07-12 11:31:06.690250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:20.581 [2024-07-12 11:31:06.690354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.581 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.581 [2024-07-12 11:31:06.800971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.839 [2024-07-12 11:31:07.010060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.839 [2024-07-12 11:31:07.010109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:20.839 [2024-07-12 11:31:07.010121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.839 [2024-07-12 11:31:07.010132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.839 [2024-07-12 11:31:07.010141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.839 [2024-07-12 11:31:07.010176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.408 [2024-07-12 11:31:07.504822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.408 malloc0 00:25:21.408 [2024-07-12 11:31:07.581385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:21.408 [2024-07-12 11:31:07.581632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=997642 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
997642 /var/tmp/bdevperf.sock 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 997642 ']' 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.408 11:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.408 [2024-07-12 11:31:07.681263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:21.409 [2024-07-12 11:31:07.681364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997642 ] 00:25:21.409 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.668 [2024-07-12 11:31:07.782788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.668 [2024-07-12 11:31:08.005523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.236 11:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.236 11:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:22.236 11:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YAhBXty2SN 00:25:22.495 11:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:22.495 [2024-07-12 11:31:08.761253] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:22.495 nvme0n1 00:25:22.754 11:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.754 Running I/O for 1 seconds... 
00:25:23.690 00:25:23.690 Latency(us) 00:25:23.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.691 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:23.691 Verification LBA range: start 0x0 length 0x2000 00:25:23.691 nvme0n1 : 1.01 4403.31 17.20 0.00 0.00 28834.82 6439.62 24960.67 00:25:23.691 =================================================================================================================== 00:25:23.691 Total : 4403.31 17.20 0.00 0.00 28834.82 6439.62 24960.67 00:25:23.691 0 00:25:23.691 11:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:23.691 11:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.691 11:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.950 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.950 11:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:23.950 "subsystems": [ 00:25:23.950 { 00:25:23.950 "subsystem": "keyring", 00:25:23.950 "config": [ 00:25:23.950 { 00:25:23.950 "method": "keyring_file_add_key", 00:25:23.950 "params": { 00:25:23.950 "name": "key0", 00:25:23.950 "path": "/tmp/tmp.YAhBXty2SN" 00:25:23.950 } 00:25:23.950 } 00:25:23.950 ] 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "subsystem": "iobuf", 00:25:23.950 "config": [ 00:25:23.950 { 00:25:23.950 "method": "iobuf_set_options", 00:25:23.950 "params": { 00:25:23.950 "small_pool_count": 8192, 00:25:23.950 "large_pool_count": 1024, 00:25:23.950 "small_bufsize": 8192, 00:25:23.950 "large_bufsize": 135168 00:25:23.950 } 00:25:23.950 } 00:25:23.950 ] 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "subsystem": "sock", 00:25:23.950 "config": [ 00:25:23.950 { 00:25:23.950 "method": "sock_set_default_impl", 00:25:23.950 "params": { 00:25:23.950 "impl_name": "posix" 00:25:23.950 } 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "method": "sock_impl_set_options", 00:25:23.950 
"params": { 00:25:23.950 "impl_name": "ssl", 00:25:23.950 "recv_buf_size": 4096, 00:25:23.950 "send_buf_size": 4096, 00:25:23.950 "enable_recv_pipe": true, 00:25:23.950 "enable_quickack": false, 00:25:23.950 "enable_placement_id": 0, 00:25:23.950 "enable_zerocopy_send_server": true, 00:25:23.950 "enable_zerocopy_send_client": false, 00:25:23.950 "zerocopy_threshold": 0, 00:25:23.950 "tls_version": 0, 00:25:23.950 "enable_ktls": false 00:25:23.950 } 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "method": "sock_impl_set_options", 00:25:23.950 "params": { 00:25:23.950 "impl_name": "posix", 00:25:23.950 "recv_buf_size": 2097152, 00:25:23.950 "send_buf_size": 2097152, 00:25:23.950 "enable_recv_pipe": true, 00:25:23.950 "enable_quickack": false, 00:25:23.950 "enable_placement_id": 0, 00:25:23.950 "enable_zerocopy_send_server": true, 00:25:23.950 "enable_zerocopy_send_client": false, 00:25:23.950 "zerocopy_threshold": 0, 00:25:23.950 "tls_version": 0, 00:25:23.950 "enable_ktls": false 00:25:23.950 } 00:25:23.950 } 00:25:23.950 ] 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "subsystem": "vmd", 00:25:23.950 "config": [] 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "subsystem": "accel", 00:25:23.950 "config": [ 00:25:23.950 { 00:25:23.950 "method": "accel_set_options", 00:25:23.950 "params": { 00:25:23.950 "small_cache_size": 128, 00:25:23.950 "large_cache_size": 16, 00:25:23.950 "task_count": 2048, 00:25:23.950 "sequence_count": 2048, 00:25:23.950 "buf_count": 2048 00:25:23.950 } 00:25:23.950 } 00:25:23.950 ] 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "subsystem": "bdev", 00:25:23.950 "config": [ 00:25:23.950 { 00:25:23.950 "method": "bdev_set_options", 00:25:23.950 "params": { 00:25:23.950 "bdev_io_pool_size": 65535, 00:25:23.950 "bdev_io_cache_size": 256, 00:25:23.950 "bdev_auto_examine": true, 00:25:23.950 "iobuf_small_cache_size": 128, 00:25:23.950 "iobuf_large_cache_size": 16 00:25:23.950 } 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "method": "bdev_raid_set_options", 
00:25:23.950 "params": { 00:25:23.950 "process_window_size_kb": 1024 00:25:23.950 } 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "method": "bdev_iscsi_set_options", 00:25:23.950 "params": { 00:25:23.950 "timeout_sec": 30 00:25:23.950 } 00:25:23.950 }, 00:25:23.950 { 00:25:23.950 "method": "bdev_nvme_set_options", 00:25:23.950 "params": { 00:25:23.950 "action_on_timeout": "none", 00:25:23.950 "timeout_us": 0, 00:25:23.950 "timeout_admin_us": 0, 00:25:23.950 "keep_alive_timeout_ms": 10000, 00:25:23.950 "arbitration_burst": 0, 00:25:23.951 "low_priority_weight": 0, 00:25:23.951 "medium_priority_weight": 0, 00:25:23.951 "high_priority_weight": 0, 00:25:23.951 "nvme_adminq_poll_period_us": 10000, 00:25:23.951 "nvme_ioq_poll_period_us": 0, 00:25:23.951 "io_queue_requests": 0, 00:25:23.951 "delay_cmd_submit": true, 00:25:23.951 "transport_retry_count": 4, 00:25:23.951 "bdev_retry_count": 3, 00:25:23.951 "transport_ack_timeout": 0, 00:25:23.951 "ctrlr_loss_timeout_sec": 0, 00:25:23.951 "reconnect_delay_sec": 0, 00:25:23.951 "fast_io_fail_timeout_sec": 0, 00:25:23.951 "disable_auto_failback": false, 00:25:23.951 "generate_uuids": false, 00:25:23.951 "transport_tos": 0, 00:25:23.951 "nvme_error_stat": false, 00:25:23.951 "rdma_srq_size": 0, 00:25:23.951 "io_path_stat": false, 00:25:23.951 "allow_accel_sequence": false, 00:25:23.951 "rdma_max_cq_size": 0, 00:25:23.951 "rdma_cm_event_timeout_ms": 0, 00:25:23.951 "dhchap_digests": [ 00:25:23.951 "sha256", 00:25:23.951 "sha384", 00:25:23.951 "sha512" 00:25:23.951 ], 00:25:23.951 "dhchap_dhgroups": [ 00:25:23.951 "null", 00:25:23.951 "ffdhe2048", 00:25:23.951 "ffdhe3072", 00:25:23.951 "ffdhe4096", 00:25:23.951 "ffdhe6144", 00:25:23.951 "ffdhe8192" 00:25:23.951 ] 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "bdev_nvme_set_hotplug", 00:25:23.951 "params": { 00:25:23.951 "period_us": 100000, 00:25:23.951 "enable": false 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "bdev_malloc_create", 
00:25:23.951 "params": { 00:25:23.951 "name": "malloc0", 00:25:23.951 "num_blocks": 8192, 00:25:23.951 "block_size": 4096, 00:25:23.951 "physical_block_size": 4096, 00:25:23.951 "uuid": "537d4458-f852-4b11-a0ef-c4be43c1033b", 00:25:23.951 "optimal_io_boundary": 0 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "bdev_wait_for_examine" 00:25:23.951 } 00:25:23.951 ] 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "subsystem": "nbd", 00:25:23.951 "config": [] 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "subsystem": "scheduler", 00:25:23.951 "config": [ 00:25:23.951 { 00:25:23.951 "method": "framework_set_scheduler", 00:25:23.951 "params": { 00:25:23.951 "name": "static" 00:25:23.951 } 00:25:23.951 } 00:25:23.951 ] 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "subsystem": "nvmf", 00:25:23.951 "config": [ 00:25:23.951 { 00:25:23.951 "method": "nvmf_set_config", 00:25:23.951 "params": { 00:25:23.951 "discovery_filter": "match_any", 00:25:23.951 "admin_cmd_passthru": { 00:25:23.951 "identify_ctrlr": false 00:25:23.951 } 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_set_max_subsystems", 00:25:23.951 "params": { 00:25:23.951 "max_subsystems": 1024 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_set_crdt", 00:25:23.951 "params": { 00:25:23.951 "crdt1": 0, 00:25:23.951 "crdt2": 0, 00:25:23.951 "crdt3": 0 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_create_transport", 00:25:23.951 "params": { 00:25:23.951 "trtype": "TCP", 00:25:23.951 "max_queue_depth": 128, 00:25:23.951 "max_io_qpairs_per_ctrlr": 127, 00:25:23.951 "in_capsule_data_size": 4096, 00:25:23.951 "max_io_size": 131072, 00:25:23.951 "io_unit_size": 131072, 00:25:23.951 "max_aq_depth": 128, 00:25:23.951 "num_shared_buffers": 511, 00:25:23.951 "buf_cache_size": 4294967295, 00:25:23.951 "dif_insert_or_strip": false, 00:25:23.951 "zcopy": false, 00:25:23.951 "c2h_success": false, 00:25:23.951 "sock_priority": 0, 
00:25:23.951 "abort_timeout_sec": 1, 00:25:23.951 "ack_timeout": 0, 00:25:23.951 "data_wr_pool_size": 0 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_create_subsystem", 00:25:23.951 "params": { 00:25:23.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.951 "allow_any_host": false, 00:25:23.951 "serial_number": "00000000000000000000", 00:25:23.951 "model_number": "SPDK bdev Controller", 00:25:23.951 "max_namespaces": 32, 00:25:23.951 "min_cntlid": 1, 00:25:23.951 "max_cntlid": 65519, 00:25:23.951 "ana_reporting": false 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_subsystem_add_host", 00:25:23.951 "params": { 00:25:23.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.951 "host": "nqn.2016-06.io.spdk:host1", 00:25:23.951 "psk": "key0" 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_subsystem_add_ns", 00:25:23.951 "params": { 00:25:23.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.951 "namespace": { 00:25:23.951 "nsid": 1, 00:25:23.951 "bdev_name": "malloc0", 00:25:23.951 "nguid": "537D4458F8524B11A0EFC4BE43C1033B", 00:25:23.951 "uuid": "537d4458-f852-4b11-a0ef-c4be43c1033b", 00:25:23.951 "no_auto_visible": false 00:25:23.951 } 00:25:23.951 } 00:25:23.951 }, 00:25:23.951 { 00:25:23.951 "method": "nvmf_subsystem_add_listener", 00:25:23.951 "params": { 00:25:23.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.951 "listen_address": { 00:25:23.951 "trtype": "TCP", 00:25:23.951 "adrfam": "IPv4", 00:25:23.951 "traddr": "10.0.0.2", 00:25:23.951 "trsvcid": "4420" 00:25:23.951 }, 00:25:23.951 "secure_channel": true 00:25:23.951 } 00:25:23.951 } 00:25:23.951 ] 00:25:23.951 } 00:25:23.951 ] 00:25:23.951 }' 00:25:23.951 11:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:24.210 11:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:24.210 "subsystems": [ 00:25:24.210 { 
00:25:24.210 "subsystem": "keyring", 00:25:24.210 "config": [ 00:25:24.211 { 00:25:24.211 "method": "keyring_file_add_key", 00:25:24.211 "params": { 00:25:24.211 "name": "key0", 00:25:24.211 "path": "/tmp/tmp.YAhBXty2SN" 00:25:24.211 } 00:25:24.211 } 00:25:24.211 ] 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "iobuf", 00:25:24.211 "config": [ 00:25:24.211 { 00:25:24.211 "method": "iobuf_set_options", 00:25:24.211 "params": { 00:25:24.211 "small_pool_count": 8192, 00:25:24.211 "large_pool_count": 1024, 00:25:24.211 "small_bufsize": 8192, 00:25:24.211 "large_bufsize": 135168 00:25:24.211 } 00:25:24.211 } 00:25:24.211 ] 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "sock", 00:25:24.211 "config": [ 00:25:24.211 { 00:25:24.211 "method": "sock_set_default_impl", 00:25:24.211 "params": { 00:25:24.211 "impl_name": "posix" 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "sock_impl_set_options", 00:25:24.211 "params": { 00:25:24.211 "impl_name": "ssl", 00:25:24.211 "recv_buf_size": 4096, 00:25:24.211 "send_buf_size": 4096, 00:25:24.211 "enable_recv_pipe": true, 00:25:24.211 "enable_quickack": false, 00:25:24.211 "enable_placement_id": 0, 00:25:24.211 "enable_zerocopy_send_server": true, 00:25:24.211 "enable_zerocopy_send_client": false, 00:25:24.211 "zerocopy_threshold": 0, 00:25:24.211 "tls_version": 0, 00:25:24.211 "enable_ktls": false 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "sock_impl_set_options", 00:25:24.211 "params": { 00:25:24.211 "impl_name": "posix", 00:25:24.211 "recv_buf_size": 2097152, 00:25:24.211 "send_buf_size": 2097152, 00:25:24.211 "enable_recv_pipe": true, 00:25:24.211 "enable_quickack": false, 00:25:24.211 "enable_placement_id": 0, 00:25:24.211 "enable_zerocopy_send_server": true, 00:25:24.211 "enable_zerocopy_send_client": false, 00:25:24.211 "zerocopy_threshold": 0, 00:25:24.211 "tls_version": 0, 00:25:24.211 "enable_ktls": false 00:25:24.211 } 00:25:24.211 } 00:25:24.211 ] 
00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "vmd", 00:25:24.211 "config": [] 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "accel", 00:25:24.211 "config": [ 00:25:24.211 { 00:25:24.211 "method": "accel_set_options", 00:25:24.211 "params": { 00:25:24.211 "small_cache_size": 128, 00:25:24.211 "large_cache_size": 16, 00:25:24.211 "task_count": 2048, 00:25:24.211 "sequence_count": 2048, 00:25:24.211 "buf_count": 2048 00:25:24.211 } 00:25:24.211 } 00:25:24.211 ] 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "bdev", 00:25:24.211 "config": [ 00:25:24.211 { 00:25:24.211 "method": "bdev_set_options", 00:25:24.211 "params": { 00:25:24.211 "bdev_io_pool_size": 65535, 00:25:24.211 "bdev_io_cache_size": 256, 00:25:24.211 "bdev_auto_examine": true, 00:25:24.211 "iobuf_small_cache_size": 128, 00:25:24.211 "iobuf_large_cache_size": 16 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_raid_set_options", 00:25:24.211 "params": { 00:25:24.211 "process_window_size_kb": 1024 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_iscsi_set_options", 00:25:24.211 "params": { 00:25:24.211 "timeout_sec": 30 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_nvme_set_options", 00:25:24.211 "params": { 00:25:24.211 "action_on_timeout": "none", 00:25:24.211 "timeout_us": 0, 00:25:24.211 "timeout_admin_us": 0, 00:25:24.211 "keep_alive_timeout_ms": 10000, 00:25:24.211 "arbitration_burst": 0, 00:25:24.211 "low_priority_weight": 0, 00:25:24.211 "medium_priority_weight": 0, 00:25:24.211 "high_priority_weight": 0, 00:25:24.211 "nvme_adminq_poll_period_us": 10000, 00:25:24.211 "nvme_ioq_poll_period_us": 0, 00:25:24.211 "io_queue_requests": 512, 00:25:24.211 "delay_cmd_submit": true, 00:25:24.211 "transport_retry_count": 4, 00:25:24.211 "bdev_retry_count": 3, 00:25:24.211 "transport_ack_timeout": 0, 00:25:24.211 "ctrlr_loss_timeout_sec": 0, 00:25:24.211 "reconnect_delay_sec": 0, 00:25:24.211 
"fast_io_fail_timeout_sec": 0, 00:25:24.211 "disable_auto_failback": false, 00:25:24.211 "generate_uuids": false, 00:25:24.211 "transport_tos": 0, 00:25:24.211 "nvme_error_stat": false, 00:25:24.211 "rdma_srq_size": 0, 00:25:24.211 "io_path_stat": false, 00:25:24.211 "allow_accel_sequence": false, 00:25:24.211 "rdma_max_cq_size": 0, 00:25:24.211 "rdma_cm_event_timeout_ms": 0, 00:25:24.211 "dhchap_digests": [ 00:25:24.211 "sha256", 00:25:24.211 "sha384", 00:25:24.211 "sha512" 00:25:24.211 ], 00:25:24.211 "dhchap_dhgroups": [ 00:25:24.211 "null", 00:25:24.211 "ffdhe2048", 00:25:24.211 "ffdhe3072", 00:25:24.211 "ffdhe4096", 00:25:24.211 "ffdhe6144", 00:25:24.211 "ffdhe8192" 00:25:24.211 ] 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_nvme_attach_controller", 00:25:24.211 "params": { 00:25:24.211 "name": "nvme0", 00:25:24.211 "trtype": "TCP", 00:25:24.211 "adrfam": "IPv4", 00:25:24.211 "traddr": "10.0.0.2", 00:25:24.211 "trsvcid": "4420", 00:25:24.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.211 "prchk_reftag": false, 00:25:24.211 "prchk_guard": false, 00:25:24.211 "ctrlr_loss_timeout_sec": 0, 00:25:24.211 "reconnect_delay_sec": 0, 00:25:24.211 "fast_io_fail_timeout_sec": 0, 00:25:24.211 "psk": "key0", 00:25:24.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.211 "hdgst": false, 00:25:24.211 "ddgst": false 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_nvme_set_hotplug", 00:25:24.211 "params": { 00:25:24.211 "period_us": 100000, 00:25:24.211 "enable": false 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_enable_histogram", 00:25:24.211 "params": { 00:25:24.211 "name": "nvme0n1", 00:25:24.211 "enable": true 00:25:24.211 } 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "method": "bdev_wait_for_examine" 00:25:24.211 } 00:25:24.211 ] 00:25:24.211 }, 00:25:24.211 { 00:25:24.211 "subsystem": "nbd", 00:25:24.211 "config": [] 00:25:24.211 } 00:25:24.211 ] 00:25:24.211 }' 00:25:24.211 
11:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 997642 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 997642 ']' 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 997642 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997642 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997642' 00:25:24.211 killing process with pid 997642 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 997642 00:25:24.211 Received shutdown signal, test time was about 1.000000 seconds 00:25:24.211 00:25:24.211 Latency(us) 00:25:24.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.211 =================================================================================================================== 00:25:24.211 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.211 11:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 997642 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 997404 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 997404 ']' 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 997404 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997404 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997404' 00:25:25.149 killing process with pid 997404 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 997404 00:25:25.149 11:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 997404 00:25:26.527 11:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:26.527 11:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:26.527 "subsystems": [ 00:25:26.527 { 00:25:26.527 "subsystem": "keyring", 00:25:26.527 "config": [ 00:25:26.527 { 00:25:26.527 "method": "keyring_file_add_key", 00:25:26.527 "params": { 00:25:26.527 "name": "key0", 00:25:26.527 "path": "/tmp/tmp.YAhBXty2SN" 00:25:26.527 } 00:25:26.527 } 00:25:26.527 ] 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "subsystem": "iobuf", 00:25:26.527 "config": [ 00:25:26.527 { 00:25:26.527 "method": "iobuf_set_options", 00:25:26.527 "params": { 00:25:26.527 "small_pool_count": 8192, 00:25:26.527 "large_pool_count": 1024, 00:25:26.527 "small_bufsize": 8192, 00:25:26.527 "large_bufsize": 135168 00:25:26.527 } 00:25:26.527 } 00:25:26.527 ] 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "subsystem": "sock", 00:25:26.527 "config": [ 00:25:26.527 { 00:25:26.527 "method": "sock_set_default_impl", 00:25:26.527 "params": { 00:25:26.527 "impl_name": "posix" 00:25:26.527 } 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "method": "sock_impl_set_options", 00:25:26.527 "params": { 00:25:26.527 "impl_name": "ssl", 00:25:26.527 "recv_buf_size": 4096, 00:25:26.527 "send_buf_size": 4096, 00:25:26.527 "enable_recv_pipe": true, 00:25:26.527 "enable_quickack": false, 00:25:26.527 
"enable_placement_id": 0, 00:25:26.527 "enable_zerocopy_send_server": true, 00:25:26.527 "enable_zerocopy_send_client": false, 00:25:26.527 "zerocopy_threshold": 0, 00:25:26.527 "tls_version": 0, 00:25:26.527 "enable_ktls": false 00:25:26.527 } 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "method": "sock_impl_set_options", 00:25:26.527 "params": { 00:25:26.527 "impl_name": "posix", 00:25:26.527 "recv_buf_size": 2097152, 00:25:26.527 "send_buf_size": 2097152, 00:25:26.527 "enable_recv_pipe": true, 00:25:26.527 "enable_quickack": false, 00:25:26.527 "enable_placement_id": 0, 00:25:26.527 "enable_zerocopy_send_server": true, 00:25:26.527 "enable_zerocopy_send_client": false, 00:25:26.527 "zerocopy_threshold": 0, 00:25:26.527 "tls_version": 0, 00:25:26.527 "enable_ktls": false 00:25:26.527 } 00:25:26.527 } 00:25:26.527 ] 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "subsystem": "vmd", 00:25:26.527 "config": [] 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "subsystem": "accel", 00:25:26.527 "config": [ 00:25:26.527 { 00:25:26.527 "method": "accel_set_options", 00:25:26.527 "params": { 00:25:26.527 "small_cache_size": 128, 00:25:26.527 "large_cache_size": 16, 00:25:26.527 "task_count": 2048, 00:25:26.527 "sequence_count": 2048, 00:25:26.527 "buf_count": 2048 00:25:26.527 } 00:25:26.527 } 00:25:26.527 ] 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "subsystem": "bdev", 00:25:26.527 "config": [ 00:25:26.527 { 00:25:26.527 "method": "bdev_set_options", 00:25:26.527 "params": { 00:25:26.527 "bdev_io_pool_size": 65535, 00:25:26.527 "bdev_io_cache_size": 256, 00:25:26.527 "bdev_auto_examine": true, 00:25:26.527 "iobuf_small_cache_size": 128, 00:25:26.527 "iobuf_large_cache_size": 16 00:25:26.527 } 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "method": "bdev_raid_set_options", 00:25:26.527 "params": { 00:25:26.527 "process_window_size_kb": 1024 00:25:26.527 } 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "method": "bdev_iscsi_set_options", 00:25:26.527 "params": { 00:25:26.527 
"timeout_sec": 30 00:25:26.527 } 00:25:26.527 }, 00:25:26.527 { 00:25:26.527 "method": "bdev_nvme_set_options", 00:25:26.527 "params": { 00:25:26.528 "action_on_timeout": "none", 00:25:26.528 "timeout_us": 0, 00:25:26.528 "timeout_admin_us": 0, 00:25:26.528 "keep_alive_timeout_ms": 10000, 00:25:26.528 "arbitration_burst": 0, 00:25:26.528 "low_priority_weight": 0, 00:25:26.528 "medium_priority_weight": 0, 00:25:26.528 "high_priority_weight": 0, 00:25:26.528 "nvme_adminq_poll_period_us": 10000, 00:25:26.528 "nvme_ioq_poll_period_us": 0, 00:25:26.528 "io_queue_requests": 0, 00:25:26.528 "delay_cmd_submit": true, 00:25:26.528 "transport_retry_count": 4, 00:25:26.528 "bdev_retry_count": 3, 00:25:26.528 "transport_ack_timeout": 0, 00:25:26.528 "ctrlr_loss_timeout_sec": 0, 00:25:26.528 "reconnect_delay_sec": 0, 00:25:26.528 "fast_io_fail_timeout_sec": 0, 00:25:26.528 "disable_auto_failback": false, 00:25:26.528 "generate_uuids": false, 00:25:26.528 "transport_tos": 0, 00:25:26.528 "nvme_error_stat": false, 00:25:26.528 "rdma_srq_size": 0, 00:25:26.528 "io_path_stat": false, 00:25:26.528 "allow_accel_sequence": false, 00:25:26.528 "rdma_max_cq_size": 0, 00:25:26.528 "rdma_cm_event_timeout_ms": 0, 00:25:26.528 "dhchap_digests": [ 00:25:26.528 "sha256", 00:25:26.528 "sha384", 00:25:26.528 "sha512" 00:25:26.528 ], 00:25:26.528 "dhchap_dhgroups": [ 00:25:26.528 "null", 00:25:26.528 "ffdhe2048", 00:25:26.528 "ffdhe3072", 00:25:26.528 "ffdhe4096", 00:25:26.528 "ffdhe6144", 00:25:26.528 "ffdhe8192" 00:25:26.528 ] 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "bdev_nvme_set_hotplug", 00:25:26.528 "params": { 00:25:26.528 "period_us": 100000, 00:25:26.528 "enable": false 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "bdev_malloc_create", 00:25:26.528 "params": { 00:25:26.528 "name": "malloc0", 00:25:26.528 "num_blocks": 8192, 00:25:26.528 "block_size": 4096, 00:25:26.528 "physical_block_size": 4096, 00:25:26.528 "uuid": 
"537d4458-f852-4b11-a0ef-c4be43c1033b", 00:25:26.528 "optimal_io_boundary": 0 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "bdev_wait_for_examine" 00:25:26.528 } 00:25:26.528 ] 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "subsystem": "nbd", 00:25:26.528 "config": [] 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "subsystem": "scheduler", 00:25:26.528 "config": [ 00:25:26.528 { 00:25:26.528 "method": "framework_set_scheduler", 00:25:26.528 "params": { 00:25:26.528 "name": "static" 00:25:26.528 } 00:25:26.528 } 00:25:26.528 ] 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "subsystem": "nvmf", 00:25:26.528 "config": [ 00:25:26.528 { 00:25:26.528 "method": "nvmf_set_config", 00:25:26.528 "params": { 00:25:26.528 "discovery_filter": "match_any", 00:25:26.528 "admin_cmd_passthru": { 00:25:26.528 "identify_ctrlr": false 00:25:26.528 } 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_set_max_subsystems", 00:25:26.528 "params": { 00:25:26.528 "max_subsystems": 1024 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_set_crdt", 00:25:26.528 "params": { 00:25:26.528 "crdt1": 0, 00:25:26.528 "crdt2": 0, 00:25:26.528 "crdt3": 0 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_create_transport", 00:25:26.528 "params": { 00:25:26.528 "trtype": "TCP", 00:25:26.528 "max_queue_depth": 128, 00:25:26.528 "max_io_qpairs_per_ctrlr": 127, 00:25:26.528 "in_capsule_data_size": 4096, 00:25:26.528 "max_io_size": 131072, 00:25:26.528 "io_unit_size": 131072, 00:25:26.528 "max_aq_depth": 128, 00:25:26.528 "num_shared_buffers": 511, 00:25:26.528 "buf_cache_size": 4294967295, 00:25:26.528 "dif_insert_or_strip": false, 00:25:26.528 "zcopy": false, 00:25:26.528 "c2h_success": false, 00:25:26.528 "sock_priority": 0, 00:25:26.528 "abort_timeout_sec": 1, 00:25:26.528 "ack_timeout": 0, 00:25:26.528 "data_wr_pool_size": 0 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": 
"nvmf_create_subsystem", 00:25:26.528 "params": { 00:25:26.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.528 "allow_any_host": false, 00:25:26.528 "serial_number": "00000000000000000000", 00:25:26.528 "model_number": "SPDK bdev Controller", 00:25:26.528 "max_namespaces": 32, 00:25:26.528 "min_cntlid": 1, 00:25:26.528 "max_cntlid": 65519, 00:25:26.528 "ana_reporting": false 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_subsystem_add_host", 00:25:26.528 "params": { 00:25:26.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.528 "host": "nqn.2016-06.io.spdk:host1", 00:25:26.528 "psk": "key0" 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_subsystem_add_ns", 00:25:26.528 "params": { 00:25:26.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.528 "namespace": { 00:25:26.528 "nsid": 1, 00:25:26.528 "bdev_name": "malloc0", 00:25:26.528 "nguid": "537D4458F8524B11A0EFC4BE43C1033B", 00:25:26.528 "uuid": "537d4458-f852-4b11-a0ef-c4be43c1033b", 00:25:26.528 "no_auto_visible": false 00:25:26.528 } 00:25:26.528 } 00:25:26.528 }, 00:25:26.528 { 00:25:26.528 "method": "nvmf_subsystem_add_listener", 00:25:26.528 "params": { 00:25:26.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.528 "listen_address": { 00:25:26.528 "trtype": "TCP", 00:25:26.528 "adrfam": "IPv4", 00:25:26.528 "traddr": "10.0.0.2", 00:25:26.528 "trsvcid": "4420" 00:25:26.528 }, 00:25:26.528 "secure_channel": true 00:25:26.528 } 00:25:26.528 } 00:25:26.528 ] 00:25:26.528 } 00:25:26.528 ] 00:25:26.528 }' 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c 
/dev/fd/62 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=998571 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 998571 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 998571 ']' 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.528 11:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.791 [2024-07-12 11:31:12.897519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:26.791 [2024-07-12 11:31:12.897608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.791 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.791 [2024-07-12 11:31:12.998851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.050 [2024-07-12 11:31:13.213029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.050 [2024-07-12 11:31:13.213071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:27.050 [2024-07-12 11:31:13.213083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.050 [2024-07-12 11:31:13.213097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.050 [2024-07-12 11:31:13.213107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.050 [2024-07-12 11:31:13.213195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.617 [2024-07-12 11:31:13.762093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.617 [2024-07-12 11:31:13.794112] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:27.617 [2024-07-12 11:31:13.794316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=998697 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 998697 /var/tmp/bdevperf.sock 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 998697 ']' 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.617 11:31:13 
nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.617 11:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:27.618 "subsystems": [ 00:25:27.618 { 00:25:27.618 "subsystem": "keyring", 00:25:27.618 "config": [ 00:25:27.618 { 00:25:27.618 "method": "keyring_file_add_key", 00:25:27.618 "params": { 00:25:27.618 "name": "key0", 00:25:27.618 "path": "/tmp/tmp.YAhBXty2SN" 00:25:27.618 } 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "iobuf", 00:25:27.618 "config": [ 00:25:27.618 { 00:25:27.618 "method": "iobuf_set_options", 00:25:27.618 "params": { 00:25:27.618 "small_pool_count": 8192, 00:25:27.618 "large_pool_count": 1024, 00:25:27.618 "small_bufsize": 8192, 00:25:27.618 "large_bufsize": 135168 00:25:27.618 } 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "sock", 00:25:27.618 "config": [ 00:25:27.618 { 00:25:27.618 "method": "sock_set_default_impl", 00:25:27.618 "params": { 00:25:27.618 "impl_name": "posix" 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "sock_impl_set_options", 00:25:27.618 "params": { 00:25:27.618 "impl_name": "ssl", 00:25:27.618 "recv_buf_size": 4096, 00:25:27.618 "send_buf_size": 4096, 00:25:27.618 "enable_recv_pipe": true, 00:25:27.618 "enable_quickack": false, 00:25:27.618 "enable_placement_id": 0, 00:25:27.618 "enable_zerocopy_send_server": true, 00:25:27.618 "enable_zerocopy_send_client": false, 00:25:27.618 
"zerocopy_threshold": 0, 00:25:27.618 "tls_version": 0, 00:25:27.618 "enable_ktls": false 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "sock_impl_set_options", 00:25:27.618 "params": { 00:25:27.618 "impl_name": "posix", 00:25:27.618 "recv_buf_size": 2097152, 00:25:27.618 "send_buf_size": 2097152, 00:25:27.618 "enable_recv_pipe": true, 00:25:27.618 "enable_quickack": false, 00:25:27.618 "enable_placement_id": 0, 00:25:27.618 "enable_zerocopy_send_server": true, 00:25:27.618 "enable_zerocopy_send_client": false, 00:25:27.618 "zerocopy_threshold": 0, 00:25:27.618 "tls_version": 0, 00:25:27.618 "enable_ktls": false 00:25:27.618 } 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "vmd", 00:25:27.618 "config": [] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "accel", 00:25:27.618 "config": [ 00:25:27.618 { 00:25:27.618 "method": "accel_set_options", 00:25:27.618 "params": { 00:25:27.618 "small_cache_size": 128, 00:25:27.618 "large_cache_size": 16, 00:25:27.618 "task_count": 2048, 00:25:27.618 "sequence_count": 2048, 00:25:27.618 "buf_count": 2048 00:25:27.618 } 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "bdev", 00:25:27.618 "config": [ 00:25:27.618 { 00:25:27.618 "method": "bdev_set_options", 00:25:27.618 "params": { 00:25:27.618 "bdev_io_pool_size": 65535, 00:25:27.618 "bdev_io_cache_size": 256, 00:25:27.618 "bdev_auto_examine": true, 00:25:27.618 "iobuf_small_cache_size": 128, 00:25:27.618 "iobuf_large_cache_size": 16 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_raid_set_options", 00:25:27.618 "params": { 00:25:27.618 "process_window_size_kb": 1024 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_iscsi_set_options", 00:25:27.618 "params": { 00:25:27.618 "timeout_sec": 30 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_nvme_set_options", 00:25:27.618 "params": { 00:25:27.618 
"action_on_timeout": "none", 00:25:27.618 "timeout_us": 0, 00:25:27.618 "timeout_admin_us": 0, 00:25:27.618 "keep_alive_timeout_ms": 10000, 00:25:27.618 "arbitration_burst": 0, 00:25:27.618 "low_priority_weight": 0, 00:25:27.618 "medium_priority_weight": 0, 00:25:27.618 "high_priority_weight": 0, 00:25:27.618 "nvme_adminq_poll_period_us": 10000, 00:25:27.618 "nvme_ioq_poll_period_us": 0, 00:25:27.618 "io_queue_requests": 512, 00:25:27.618 "delay_cmd_submit": true, 00:25:27.618 "transport_retry_count": 4, 00:25:27.618 "bdev_retry_count": 3, 00:25:27.618 "transport_ack_timeout": 0, 00:25:27.618 "ctrlr_loss_timeout_sec": 0, 00:25:27.618 "reconnect_delay_sec": 0, 00:25:27.618 "fast_io_fail_timeout_sec": 0, 00:25:27.618 "disable_auto_failback": false, 00:25:27.618 "generate_uuids": false, 00:25:27.618 "transport_tos": 0, 00:25:27.618 "nvme_error_stat": false, 00:25:27.618 "rdma_srq_size": 0, 00:25:27.618 "io_path_stat": false, 00:25:27.618 "allow_accel_sequence": false, 00:25:27.618 "rdma_max_cq_size": 0, 00:25:27.618 "rdma_cm_event_timeout_ms": 0, 00:25:27.618 "dhchap_digests": [ 00:25:27.618 "sha256", 00:25:27.618 "sha384", 00:25:27.618 "sha512" 00:25:27.618 ], 00:25:27.618 "dhchap_dhgroups": [ 00:25:27.618 "null", 00:25:27.618 "ffdhe2048", 00:25:27.618 "ffdhe3072", 00:25:27.618 "ffdhe4096", 00:25:27.618 "ffdhe6144", 00:25:27.618 "ffdhe8192" 00:25:27.618 ] 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_nvme_attach_controller", 00:25:27.618 "params": { 00:25:27.618 "name": "nvme0", 00:25:27.618 "trtype": "TCP", 00:25:27.618 "adrfam": "IPv4", 00:25:27.618 "traddr": "10.0.0.2", 00:25:27.618 "trsvcid": "4420", 00:25:27.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.618 "prchk_reftag": false, 00:25:27.618 "prchk_guard": false, 00:25:27.618 "ctrlr_loss_timeout_sec": 0, 00:25:27.618 "reconnect_delay_sec": 0, 00:25:27.618 "fast_io_fail_timeout_sec": 0, 00:25:27.618 "psk": "key0", 00:25:27.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:25:27.618 "hdgst": false, 00:25:27.618 "ddgst": false 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_nvme_set_hotplug", 00:25:27.618 "params": { 00:25:27.618 "period_us": 100000, 00:25:27.618 "enable": false 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_enable_histogram", 00:25:27.618 "params": { 00:25:27.618 "name": "nvme0n1", 00:25:27.618 "enable": true 00:25:27.618 } 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "method": "bdev_wait_for_examine" 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }, 00:25:27.618 { 00:25:27.618 "subsystem": "nbd", 00:25:27.618 "config": [] 00:25:27.618 } 00:25:27.618 ] 00:25:27.618 }' 00:25:27.618 11:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.618 [2024-07-12 11:31:13.923112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:27.618 [2024-07-12 11:31:13.923206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998697 ] 00:25:27.877 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.877 [2024-07-12 11:31:14.025422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.136 [2024-07-12 11:31:14.250344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.395 [2024-07-12 11:31:14.704569] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- 
target/tls.sh@275 -- # jq -r '.[].name' 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.654 11:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:28.912 Running I/O for 1 seconds... 00:25:29.849 00:25:29.849 Latency(us) 00:25:29.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.849 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:29.849 Verification LBA range: start 0x0 length 0x2000 00:25:29.849 nvme0n1 : 1.02 4443.19 17.36 0.00 0.00 28521.40 7807.33 33052.94 00:25:29.849 =================================================================================================================== 00:25:29.849 Total : 4443.19 17.36 0.00 0.00 28521.40 7807.33 33052.94 00:25:29.849 0 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:29.849 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:29.849 nvmf_trace.0 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 998697 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 998697 ']' 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 998697 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998697 00:25:30.108 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:30.109 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:30.109 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998697' 00:25:30.109 killing process with pid 998697 00:25:30.109 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 998697 00:25:30.109 Received shutdown signal, test time was about 1.000000 seconds 00:25:30.109 00:25:30.109 Latency(us) 00:25:30.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.109 =================================================================================================================== 00:25:30.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.109 11:31:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 998697 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- 
# '[' tcp == tcp ']' 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.046 rmmod nvme_tcp 00:25:31.046 rmmod nvme_fabrics 00:25:31.046 rmmod nvme_keyring 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 998571 ']' 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 998571 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 998571 ']' 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 998571 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998571 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998571' 00:25:31.046 killing process with pid 998571 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 998571 00:25:31.046 11:31:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 998571 00:25:32.420 11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.420 11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.420 
11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.420 11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.420 11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.421 11:31:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.421 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.421 11:31:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.981 11:31:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.981 11:31:20 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2OOpA8GfQW /tmp/tmp.vJ2nm1dGbt /tmp/tmp.YAhBXty2SN 00:25:34.981 00:25:34.981 real 1m45.786s 00:25:34.981 user 2m41.961s 00:25:34.981 sys 0m29.356s 00:25:34.981 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:34.981 11:31:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:34.981 ************************************ 00:25:34.981 END TEST nvmf_tls 00:25:34.981 ************************************ 00:25:34.981 11:31:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:34.981 11:31:20 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:34.981 11:31:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:34.981 11:31:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.981 11:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.981 ************************************ 00:25:34.981 START TEST nvmf_fips 00:25:34.981 ************************************ 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 
00:25:34.981 * Looking for test storage... 00:25:34.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.981 11:31:20 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.982 11:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:34.982 Error setting digest 00:25:34.982 00C21221257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:34.982 00C21221257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.982 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:25:34.983 11:31:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:40.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:40.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:40.257 Found net devices under 0000:86:00.0: cvl_0_0 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:40.257 Found net devices under 0000:86:00.1: cvl_0_1 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.257 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.258 11:31:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:25:40.258 00:25:40.258 --- 10.0.0.2 ping statistics --- 00:25:40.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.258 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:25:40.258 00:25:40.258 --- 10.0.0.1 ping statistics --- 00:25:40.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.258 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1002845 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1002845 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1002845 ']' 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.258 11:31:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.258 [2024-07-12 11:31:26.294531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:40.258 [2024-07-12 11:31:26.294622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.258 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.258 [2024-07-12 11:31:26.401768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.517 [2024-07-12 11:31:26.620571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.517 [2024-07-12 11:31:26.620615] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.517 [2024-07-12 11:31:26.620626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.517 [2024-07-12 11:31:26.620656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.517 [2024-07-12 11:31:26.620665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.517 [2024-07-12 11:31:26.620699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:40.777 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:41.037 [2024-07-12 11:31:27.227042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.037 [2024-07-12 11:31:27.243023] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:25:41.037 [2024-07-12 11:31:27.243259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.037 [2024-07-12 11:31:27.315987] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:41.037 malloc0 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1003092 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1003092 /var/tmp/bdevperf.sock 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1003092 ']' 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.037 11:31:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:41.297 [2024-07-12 11:31:27.433364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:41.297 [2024-07-12 11:31:27.433529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003092 ] 00:25:41.297 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.297 [2024-07-12 11:31:27.532289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.556 [2024-07-12 11:31:27.752948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.124 11:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.124 11:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:42.124 11:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:42.124 [2024-07-12 11:31:28.340105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:42.124 [2024-07-12 11:31:28.340233] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:42.124 TLSTESTn1 00:25:42.124 11:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:42.383 Running I/O for 10 seconds... 
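Before the `bdev_nvme_attach_controller ... --psk` call in the trace, `fips.sh@136-139` materializes the TLS interchange key into a file and locks its permissions down, since the RPC takes a key *path* rather than the key itself. A hedged sketch of that preparation, using a temp file instead of the test tree's `key.txt` path:

```shell
# Sketch of the PSK file setup from fips.sh@136-139 above. The key value
# is the (non-secret) test vector from the trace; the temp path replaces
# the repo's test/nvmf/fips/key.txt location.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp)
echo -n "$key" > "$key_path"   # no trailing newline, matching the script
chmod 0600 "$key_path"         # TLS PSK files must be owner-readable only
```

The `0600` mode matters: key files readable by group or world are commonly rejected by TLS tooling, and the test script applies the same `chmod`.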
00:25:52.360 00:25:52.360 Latency(us) 00:25:52.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.360 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:52.360 Verification LBA range: start 0x0 length 0x2000 00:25:52.360 TLSTESTn1 : 10.04 4617.42 18.04 0.00 0.00 27656.53 6211.67 34648.60 00:25:52.360 =================================================================================================================== 00:25:52.360 Total : 4617.42 18.04 0.00 0.00 27656.53 6211.67 34648.60 00:25:52.360 0 00:25:52.360 11:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:52.360 11:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:52.360 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:52.360 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:52.360 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:52.361 nvmf_trace.0 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1003092 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1003092 ']' 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 1003092 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003092 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1003092' 00:25:52.361 killing process with pid 1003092 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1003092 00:25:52.361 Received shutdown signal, test time was about 10.000000 seconds 00:25:52.361 00:25:52.361 Latency(us) 00:25:52.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.361 =================================================================================================================== 00:25:52.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.361 [2024-07-12 11:31:38.709554] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:52.361 11:31:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1003092 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:25:53.740 rmmod nvme_tcp 00:25:53.740 rmmod nvme_fabrics 00:25:53.740 rmmod nvme_keyring 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1002845 ']' 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1002845 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1002845 ']' 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1002845 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1002845 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1002845' 00:25:53.740 killing process with pid 1002845 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1002845 00:25:53.740 [2024-07-12 11:31:39.898399] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:53.740 11:31:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1002845 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
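The teardown above uses the `killprocess` helper from `autotest_common.sh`, which checks via `ps -o comm=` that the pid still belongs to the expected process before signalling it, so a recycled pid is never killed by mistake. A simplified sketch of that guard (the function body here is an illustration of the pattern, not the exact helper from the repo):

```shell
# Sketch of the killprocess pattern seen in autotest_common.sh@952-967:
# only signal the pid if it still maps to the expected command name.
killprocess() {
    local pid=$1 expected=$2 name
    # comm= prints the command name with no header; empty if pid is gone.
    name=$(ps --no-headers -o comm= "$pid" 2>/dev/null) || return 0
    [[ "$name" == "$expected" ]] && kill "$pid"
}
```

In the trace the same check resolves the pid to `reactor_1`/`reactor_2` (the SPDK reactor threads) before the kill is issued.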
00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.119 11:31:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:57.653 00:25:57.653 real 0m22.529s 00:25:57.653 user 0m25.694s 00:25:57.653 sys 0m8.471s 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:57.653 ************************************ 00:25:57.653 END TEST nvmf_fips 00:25:57.653 ************************************ 00:25:57.653 11:31:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:57.653 11:31:43 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:57.653 11:31:43 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:57.653 11:31:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.653 11:31:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.653 11:31:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:57.653 ************************************ 00:25:57.653 START TEST nvmf_fuzz 00:25:57.653 ************************************ 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:57.653 * Looking for test storage... 00:25:57.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.653 11:31:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.924 
11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:02.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:02.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:02.924 Found net devices under 0000:86:00.0: cvl_0_0 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:02.924 Found net devices under 0000:86:00.1: cvl_0_1 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.924 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:02.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:26:02.924 00:26:02.924 --- 10.0.0.2 ping statistics --- 00:26:02.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.925 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:26:02.925 00:26:02.925 --- 10.0.0.1 ping statistics --- 00:26:02.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.925 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=1008673 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1008673 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1008673 ']' 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.925 11:31:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 Malloc0 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:03.493 11:31:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:35.572 Fuzzing completed. Shutting down the fuzz application 00:26:35.573 00:26:35.573 Dumping successful admin opcodes: 00:26:35.573 8, 9, 10, 24, 00:26:35.573 Dumping successful io opcodes: 00:26:35.573 0, 9, 00:26:35.573 NS: 0x200003aefec0 I/O qp, Total commands completed: 668031, total successful commands: 3908, random_seed: 1025637376 00:26:35.573 NS: 0x200003aefec0 admin qp, Total commands completed: 77751, total successful commands: 600, random_seed: 2275238016 00:26:35.573 11:32:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:36.552 Fuzzing completed. 
Shutting down the fuzz application 00:26:36.552 00:26:36.552 Dumping successful admin opcodes: 00:26:36.552 24, 00:26:36.552 Dumping successful io opcodes: 00:26:36.552 00:26:36.552 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2769126306 00:26:36.552 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2769237932 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.552 rmmod nvme_tcp 00:26:36.552 rmmod nvme_fabrics 00:26:36.552 rmmod nvme_keyring 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1008673 ']' 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 1008673 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1008673 ']' 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1008673 00:26:36.552 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1008673 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1008673' 00:26:36.810 killing process with pid 1008673 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1008673 00:26:36.810 11:32:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1008673 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.184 11:32:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.719 11:32:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.719 11:32:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:40.719 00:26:40.719 real 0m43.017s 00:26:40.719 user 0m58.485s 00:26:40.719 sys 0m15.217s 00:26:40.719 11:32:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.719 11:32:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:40.719 ************************************ 00:26:40.719 END TEST nvmf_fuzz 00:26:40.719 ************************************ 00:26:40.719 11:32:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:40.719 11:32:26 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:40.720 11:32:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:40.720 11:32:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.720 11:32:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:40.720 ************************************ 00:26:40.720 START TEST nvmf_multiconnection 00:26:40.720 ************************************ 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:40.720 * Looking for test storage... 
00:26:40.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.720 
11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.720 11:32:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.012 
11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:46.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.012 
11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:46.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.012 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:46.013 Found net devices under 
0000:86:00.0: cvl_0_0
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:46.013 Found net devices under 0000:86:00.1: cvl_0_1
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:46.013 11:32:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:46.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:46.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:26:46.013 
00:26:46.013 --- 10.0.0.2 ping statistics ---
00:26:46.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:46.013 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:46.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:46.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:26:46.013 
00:26:46.013 --- 10.0.0.1 ping statistics ---
00:26:46.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:46.013 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1017918
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1017918
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1017918 ']'
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:46.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:46.013 11:32:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:46.013 [2024-07-12 11:32:32.359592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:26:46.013 [2024-07-12 11:32:32.359691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:46.272 EAL: No free 2048 kB hugepages reported on node 1
00:26:46.272 [2024-07-12 11:32:32.472145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:46.530 [2024-07-12 11:32:32.693851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:46.530 [2024-07-12 11:32:32.693898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:46.530 [2024-07-12 11:32:32.693910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:46.530 [2024-07-12 11:32:32.693920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:46.530 [2024-07-12 11:32:32.693929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:46.530 [2024-07-12 11:32:32.693999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:46.530 [2024-07-12 11:32:32.694069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:46.530 [2024-07-12 11:32:32.694092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:46.530 [2024-07-12 11:32:32.694084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:46.794 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:46.794 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0
00:26:46.794 11:32:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:46.794 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:46.794 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 [2024-07-12 11:32:33.181291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 Malloc1
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 [2024-07-12 11:32:33.304720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 Malloc2
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.052 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 Malloc3
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 Malloc4
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.311 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.570 Malloc5
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.570 Malloc6
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:26:47.570 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.571 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 Malloc7
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 Malloc8
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:47.830 Malloc9
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.830 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 Malloc10
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 Malloc11
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:48.089 11:32:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:26:49.467 11:32:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:26:49.467 11:32:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:26:49.467 11:32:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:26:49.467 11:32:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:26:49.467 11:32:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:51.374 11:32:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:26:52.754 11:32:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:26:52.754 11:32:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:26:52.754 11:32:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:26:52.754 11:32:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:26:52.754 11:32:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:54.657 11:32:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:26:55.592 11:32:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:26:55.592 11:32:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:26:55.592 11:32:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:26:55.592 11:32:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:26:55.592 11:32:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:58.124 11:32:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:26:59.059 11:32:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:26:59.059 11:32:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:26:59.059 11:32:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:26:59.059 11:32:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:26:59.059 11:32:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:00.960 11:32:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:27:02.333 11:32:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:27:02.333 11:32:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:27:02.333 11:32:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:27:02.333 11:32:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:27:02.333 11:32:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:04.233 11:32:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:27:05.609 11:32:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:27:05.609 11:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:27:05.609 11:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:27:05.609 11:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:27:05.609 11:32:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6
00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection --
common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.567 11:32:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:08.944 11:32:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:08.944 11:32:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:08.944 11:32:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.944 11:32:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:08.944 11:32:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:27:10.847 11:32:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:12.222 11:32:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:12.222 11:32:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:12.222 11:32:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.222 11:32:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:12.222 11:32:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.754 11:33:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:15.691 11:33:01 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:15.691 11:33:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:15.691 11:33:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:15.691 11:33:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:15.691 11:33:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.592 11:33:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:18.962 11:33:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:18.962 11:33:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:18.962 11:33:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.962 11:33:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
[[ -n '' ]] 00:27:18.962 11:33:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:21.491 11:33:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:22.423 11:33:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:22.423 11:33:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:22.423 11:33:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.423 11:33:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:22.423 11:33:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:24.952 11:33:10 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:24.952 11:33:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:24.952 [global] 00:27:24.952 thread=1 00:27:24.952 invalidate=1 00:27:24.952 rw=read 00:27:24.952 time_based=1 00:27:24.952 runtime=10 00:27:24.952 ioengine=libaio 00:27:24.952 direct=1 00:27:24.952 bs=262144 00:27:24.952 iodepth=64 00:27:24.952 norandommap=1 00:27:24.952 numjobs=1 00:27:24.952 00:27:24.952 [job0] 00:27:24.952 filename=/dev/nvme0n1 00:27:24.952 [job1] 00:27:24.952 filename=/dev/nvme10n1 00:27:24.952 [job2] 00:27:24.952 filename=/dev/nvme1n1 00:27:24.952 [job3] 00:27:24.952 filename=/dev/nvme2n1 00:27:24.952 [job4] 00:27:24.952 filename=/dev/nvme3n1 00:27:24.952 [job5] 00:27:24.952 filename=/dev/nvme4n1 00:27:24.952 [job6] 00:27:24.952 filename=/dev/nvme5n1 00:27:24.952 [job7] 00:27:24.952 filename=/dev/nvme6n1 00:27:24.952 [job8] 00:27:24.952 filename=/dev/nvme7n1 00:27:24.952 [job9] 00:27:24.952 filename=/dev/nvme8n1 00:27:24.952 [job10] 00:27:24.952 filename=/dev/nvme9n1 00:27:24.952 Could not set queue depth (nvme0n1) 00:27:24.952 Could not set queue depth (nvme10n1) 00:27:24.952 Could not set queue depth (nvme1n1) 00:27:24.952 Could not set queue depth (nvme2n1) 00:27:24.952 Could not set queue depth (nvme3n1) 00:27:24.952 Could not set queue depth (nvme4n1) 00:27:24.952 Could not set queue depth (nvme5n1) 00:27:24.952 Could not set queue depth (nvme6n1) 00:27:24.952 Could not set queue depth (nvme7n1) 00:27:24.952 Could not set 
queue depth (nvme8n1) 00:27:24.952 Could not set queue depth (nvme9n1) 00:27:24.952 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:24.952 fio-3.35 00:27:24.952 Starting 11 threads 00:27:37.159 00:27:37.159 job0: (groupid=0, jobs=1): err= 0: pid=1025100: Fri Jul 12 11:33:21 2024 00:27:37.159 read: IOPS=717, BW=179MiB/s (188MB/s)(1807MiB/10069msec) 00:27:37.159 slat (usec): min=7, max=164174, avg=699.54, stdev=5142.74 00:27:37.159 clat (usec): min=775, max=434386, avg=88359.30, stdev=62267.14 00:27:37.159 lat (usec): min=812, max=434411, avg=89058.84, stdev=62894.36 00:27:37.159 clat percentiles (msec): 00:27:37.159 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 27], 00:27:37.159 | 
30.00th=[ 41], 40.00th=[ 58], 50.00th=[ 78], 60.00th=[ 102], 00:27:37.159 | 70.00th=[ 122], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 197], 00:27:37.159 | 99.00th=[ 247], 99.50th=[ 268], 99.90th=[ 279], 99.95th=[ 279], 00:27:37.159 | 99.99th=[ 435] 00:27:37.159 bw ( KiB/s): min=97792, max=400896, per=9.35%, avg=183424.00, stdev=70508.59, samples=20 00:27:37.159 iops : min= 382, max= 1566, avg=716.50, stdev=275.42, samples=20 00:27:37.159 lat (usec) : 1000=0.06% 00:27:37.159 lat (msec) : 2=0.12%, 4=1.02%, 10=4.29%, 20=7.87%, 50=22.37% 00:27:37.159 lat (msec) : 100=23.83%, 250=39.55%, 500=0.89% 00:27:37.159 cpu : usr=0.29%, sys=2.26%, ctx=1400, majf=0, minf=4097 00:27:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.159 issued rwts: total=7229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.159 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.159 job1: (groupid=0, jobs=1): err= 0: pid=1025102: Fri Jul 12 11:33:21 2024 00:27:37.159 read: IOPS=644, BW=161MiB/s (169MB/s)(1630MiB/10125msec) 00:27:37.159 slat (usec): min=10, max=180956, avg=1328.43, stdev=5184.84 00:27:37.159 clat (usec): min=1175, max=334606, avg=97951.39, stdev=55203.66 00:27:37.159 lat (usec): min=1199, max=379534, avg=99279.82, stdev=55977.71 00:27:37.159 clat percentiles (msec): 00:27:37.159 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 35], 20.00th=[ 54], 00:27:37.159 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 87], 60.00th=[ 104], 00:27:37.159 | 70.00th=[ 122], 80.00th=[ 146], 90.00th=[ 180], 95.00th=[ 201], 00:27:37.159 | 99.00th=[ 245], 99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 330], 00:27:37.159 | 99.99th=[ 334] 00:27:37.159 bw ( KiB/s): min=83456, max=322048, per=8.42%, avg=165324.80, stdev=64103.05, samples=20 00:27:37.159 iops : min= 326, max= 1258, avg=645.80, stdev=250.40, 
samples=20 00:27:37.159 lat (msec) : 2=0.55%, 4=0.41%, 10=0.78%, 20=2.76%, 50=12.28% 00:27:37.159 lat (msec) : 100=41.13%, 250=41.16%, 500=0.92% 00:27:37.159 cpu : usr=0.24%, sys=2.58%, ctx=1212, majf=0, minf=4097 00:27:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.159 issued rwts: total=6521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job2: (groupid=0, jobs=1): err= 0: pid=1025103: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=735, BW=184MiB/s (193MB/s)(1852MiB/10072msec) 00:27:37.160 slat (usec): min=10, max=101961, avg=746.45, stdev=4490.10 00:27:37.160 clat (msec): min=2, max=344, avg=86.19, stdev=62.55 00:27:37.160 lat (msec): min=2, max=344, avg=86.93, stdev=63.24 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 20], 20.00th=[ 30], 00:27:37.160 | 30.00th=[ 41], 40.00th=[ 51], 50.00th=[ 69], 60.00th=[ 93], 00:27:37.160 | 70.00th=[ 116], 80.00th=[ 142], 90.00th=[ 182], 95.00th=[ 205], 00:27:37.160 | 99.00th=[ 255], 99.50th=[ 264], 99.90th=[ 284], 99.95th=[ 313], 00:27:37.160 | 99.99th=[ 347] 00:27:37.160 bw ( KiB/s): min=68608, max=390656, per=9.58%, avg=187955.20, stdev=88571.39, samples=20 00:27:37.160 iops : min= 268, max= 1526, avg=734.20, stdev=345.98, samples=20 00:27:37.160 lat (msec) : 4=0.32%, 10=2.92%, 20=7.44%, 50=29.25%, 100=22.82% 00:27:37.160 lat (msec) : 250=36.02%, 500=1.23% 00:27:37.160 cpu : usr=0.24%, sys=2.64%, ctx=1392, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued 
rwts: total=7406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job3: (groupid=0, jobs=1): err= 0: pid=1025104: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=866, BW=217MiB/s (227MB/s)(2192MiB/10120msec) 00:27:37.160 slat (usec): min=10, max=186029, avg=725.87, stdev=4100.73 00:27:37.160 clat (msec): min=2, max=321, avg=73.06, stdev=47.92 00:27:37.160 lat (msec): min=2, max=321, avg=73.79, stdev=48.42 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 19], 20.00th=[ 34], 00:27:37.160 | 30.00th=[ 47], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 71], 00:27:37.160 | 70.00th=[ 85], 80.00th=[ 109], 90.00th=[ 140], 95.00th=[ 176], 00:27:37.160 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 241], 99.95th=[ 249], 00:27:37.160 | 99.99th=[ 321] 00:27:37.160 bw ( KiB/s): min=90624, max=363008, per=11.36%, avg=222848.25, stdev=85169.34, samples=20 00:27:37.160 iops : min= 354, max= 1418, avg=870.50, stdev=332.69, samples=20 00:27:37.160 lat (msec) : 4=0.51%, 10=2.62%, 20=7.28%, 50=23.15%, 100=43.94% 00:27:37.160 lat (msec) : 250=22.45%, 500=0.05% 00:27:37.160 cpu : usr=0.30%, sys=2.97%, ctx=1490, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=8768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job4: (groupid=0, jobs=1): err= 0: pid=1025105: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=643, BW=161MiB/s (169MB/s)(1630MiB/10126msec) 00:27:37.160 slat (usec): min=11, max=125524, avg=1282.13, stdev=5533.58 00:27:37.160 clat (usec): min=1834, max=319440, avg=97993.30, stdev=61414.68 00:27:37.160 lat (usec): min=1869, max=345090, avg=99275.42, 
stdev=62412.15 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 40], 00:27:37.160 | 30.00th=[ 53], 40.00th=[ 69], 50.00th=[ 86], 60.00th=[ 106], 00:27:37.160 | 70.00th=[ 133], 80.00th=[ 157], 90.00th=[ 190], 95.00th=[ 207], 00:27:37.160 | 99.00th=[ 255], 99.50th=[ 266], 99.90th=[ 313], 99.95th=[ 313], 00:27:37.160 | 99.99th=[ 321] 00:27:37.160 bw ( KiB/s): min=64512, max=350208, per=8.42%, avg=165324.80, stdev=92986.85, samples=20 00:27:37.160 iops : min= 252, max= 1368, avg=645.80, stdev=363.23, samples=20 00:27:37.160 lat (msec) : 2=0.03%, 4=0.66%, 10=1.06%, 20=4.00%, 50=22.31% 00:27:37.160 lat (msec) : 100=29.80%, 250=40.98%, 500=1.17% 00:27:37.160 cpu : usr=0.29%, sys=2.53%, ctx=1236, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=6521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job5: (groupid=0, jobs=1): err= 0: pid=1025106: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=648, BW=162MiB/s (170MB/s)(1641MiB/10123msec) 00:27:37.160 slat (usec): min=10, max=142942, avg=1045.10, stdev=4831.13 00:27:37.160 clat (msec): min=2, max=292, avg=97.56, stdev=58.67 00:27:37.160 lat (msec): min=2, max=392, avg=98.60, stdev=59.37 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 47], 00:27:37.160 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 101], 00:27:37.160 | 70.00th=[ 121], 80.00th=[ 153], 90.00th=[ 188], 95.00th=[ 211], 00:27:37.160 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 279], 99.95th=[ 279], 00:27:37.160 | 99.99th=[ 292] 00:27:37.160 bw ( KiB/s): min=79360, max=340992, per=8.48%, avg=166425.60, stdev=77469.06, 
samples=20 00:27:37.160 iops : min= 310, max= 1332, avg=650.10, stdev=302.61, samples=20 00:27:37.160 lat (msec) : 4=0.21%, 10=1.61%, 20=2.50%, 50=18.45%, 100=37.51% 00:27:37.160 lat (msec) : 250=39.08%, 500=0.64% 00:27:37.160 cpu : usr=0.25%, sys=2.37%, ctx=1166, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=6564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job6: (groupid=0, jobs=1): err= 0: pid=1025107: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=612, BW=153MiB/s (160MB/s)(1549MiB/10124msec) 00:27:37.160 slat (usec): min=10, max=177422, avg=1201.99, stdev=5028.40 00:27:37.160 clat (msec): min=2, max=390, avg=103.25, stdev=64.61 00:27:37.160 lat (msec): min=2, max=390, avg=104.46, stdev=65.31 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 32], 00:27:37.160 | 30.00th=[ 54], 40.00th=[ 84], 50.00th=[ 102], 60.00th=[ 118], 00:27:37.160 | 70.00th=[ 138], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 211], 00:27:37.160 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:27:37.160 | 99.99th=[ 393] 00:27:37.160 bw ( KiB/s): min=83456, max=311919, per=8.00%, avg=157010.35, stdev=68424.12, samples=20 00:27:37.160 iops : min= 326, max= 1218, avg=613.30, stdev=267.23, samples=20 00:27:37.160 lat (msec) : 4=0.68%, 10=2.18%, 20=4.49%, 50=21.89%, 100=20.29% 00:27:37.160 lat (msec) : 250=49.29%, 500=1.19% 00:27:37.160 cpu : usr=0.24%, sys=2.27%, ctx=1235, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=6196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job7: (groupid=0, jobs=1): err= 0: pid=1025108: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=602, BW=151MiB/s (158MB/s)(1525MiB/10127msec) 00:27:37.160 slat (usec): min=14, max=150827, avg=985.01, stdev=6001.97 00:27:37.160 clat (msec): min=2, max=312, avg=105.17, stdev=68.37 00:27:37.160 lat (msec): min=2, max=330, avg=106.16, stdev=69.30 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 17], 20.00th=[ 37], 00:27:37.160 | 30.00th=[ 57], 40.00th=[ 73], 50.00th=[ 99], 60.00th=[ 124], 00:27:37.160 | 70.00th=[ 148], 80.00th=[ 171], 90.00th=[ 201], 95.00th=[ 220], 00:27:37.160 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 296], 00:27:37.160 | 99.99th=[ 313] 00:27:37.160 bw ( KiB/s): min=75264, max=279552, per=7.87%, avg=154470.40, stdev=60416.71, samples=20 00:27:37.160 iops : min= 294, max= 1092, avg=603.40, stdev=236.00, samples=20 00:27:37.160 lat (msec) : 4=0.21%, 10=3.26%, 20=9.02%, 50=13.87%, 100=24.48% 00:27:37.160 lat (msec) : 250=46.95%, 500=2.20% 00:27:37.160 cpu : usr=0.20%, sys=1.99%, ctx=1216, majf=0, minf=4097 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=6098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job8: (groupid=0, jobs=1): err= 0: pid=1025109: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=635, BW=159MiB/s (167MB/s)(1608MiB/10123msec) 00:27:37.160 slat (usec): min=14, max=128634, avg=881.13, stdev=5115.20 00:27:37.160 clat (usec): min=870, max=289849, avg=99755.93, 
stdev=66120.69 00:27:37.160 lat (usec): min=898, max=366826, avg=100637.06, stdev=66854.16 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 34], 00:27:37.160 | 30.00th=[ 56], 40.00th=[ 69], 50.00th=[ 93], 60.00th=[ 114], 00:27:37.160 | 70.00th=[ 136], 80.00th=[ 169], 90.00th=[ 197], 95.00th=[ 215], 00:27:37.160 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 275], 00:27:37.160 | 99.99th=[ 292] 00:27:37.160 bw ( KiB/s): min=76288, max=271360, per=8.31%, avg=162995.20, stdev=66416.04, samples=20 00:27:37.160 iops : min= 298, max= 1060, avg=636.70, stdev=259.44, samples=20 00:27:37.160 lat (usec) : 1000=0.06% 00:27:37.160 lat (msec) : 2=0.33%, 4=1.00%, 10=4.87%, 20=6.02%, 50=14.40% 00:27:37.160 lat (msec) : 100=27.13%, 250=45.51%, 500=0.68% 00:27:37.160 cpu : usr=0.27%, sys=2.26%, ctx=1366, majf=0, minf=3347 00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.160 issued rwts: total=6431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.160 job9: (groupid=0, jobs=1): err= 0: pid=1025110: Fri Jul 12 11:33:21 2024 00:27:37.160 read: IOPS=650, BW=163MiB/s (171MB/s)(1638MiB/10071msec) 00:27:37.160 slat (usec): min=14, max=97515, avg=1027.89, stdev=5258.19 00:27:37.160 clat (usec): min=925, max=323259, avg=97261.05, stdev=63783.98 00:27:37.160 lat (usec): min=953, max=323297, avg=98288.94, stdev=64741.32 00:27:37.160 clat percentiles (msec): 00:27:37.160 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 16], 20.00th=[ 35], 00:27:37.160 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 86], 60.00th=[ 112], 00:27:37.160 | 70.00th=[ 133], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 209], 00:27:37.160 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 296], 
99.95th=[ 321],
00:27:37.160 | 99.99th=[ 326]
00:27:37.160 bw ( KiB/s): min=63488, max=433664, per=8.46%, avg=166067.20, stdev=86795.43, samples=20
00:27:37.160 iops : min= 248, max= 1694, avg=648.70, stdev=339.04, samples=20
00:27:37.160 lat (usec) : 1000=0.03%
00:27:37.160 lat (msec) : 2=0.31%, 4=1.92%, 10=3.57%, 20=6.24%, 50=14.65%
00:27:37.160 lat (msec) : 100=28.65%, 250=43.76%, 500=0.85%
00:27:37.160 cpu : usr=0.21%, sys=2.40%, ctx=1354, majf=0, minf=4097
00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:37.160 issued rwts: total=6551,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:37.160 job10: (groupid=0, jobs=1): err= 0: pid=1025111: Fri Jul 12 11:33:21 2024
00:27:37.160 read: IOPS=922, BW=231MiB/s (242MB/s)(2336MiB/10126msec)
00:27:37.160 slat (usec): min=9, max=203504, avg=563.74, stdev=4023.23
00:27:37.160 clat (usec): min=1740, max=288296, avg=68702.82, stdev=55016.78
00:27:37.160 lat (usec): min=1797, max=465722, avg=69266.56, stdev=55424.22
00:27:37.160 clat percentiles (msec):
00:27:37.160 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 27],
00:27:37.160 | 30.00th=[ 31], 40.00th=[ 40], 50.00th=[ 53], 60.00th=[ 66],
00:27:37.160 | 70.00th=[ 83], 80.00th=[ 113], 90.00th=[ 161], 95.00th=[ 184],
00:27:37.160 | 99.00th=[ 228], 99.50th=[ 259], 99.90th=[ 279], 99.95th=[ 288],
00:27:37.160 | 99.99th=[ 288]
00:27:37.160 bw ( KiB/s): min=115712, max=528384, per=12.11%, avg=237644.80, stdev=118755.80, samples=20
00:27:37.160 iops : min= 452, max= 2064, avg=928.30, stdev=463.89, samples=20
00:27:37.160 lat (msec) : 2=0.02%, 4=0.62%, 10=7.08%, 20=5.45%, 50=35.22%
00:27:37.160 lat (msec) : 100=29.07%, 250=21.64%, 500=0.90%
00:27:37.160 cpu : usr=0.27%, sys=3.04%, ctx=1771, majf=0, minf=4097
00:27:37.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:27:37.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:37.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:37.160 issued rwts: total=9345,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:37.160 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:37.160
00:27:37.160 Run status group 0 (all jobs):
00:27:37.160 READ: bw=1916MiB/s (2010MB/s), 151MiB/s-231MiB/s (158MB/s-242MB/s), io=19.0GiB (20.3GB), run=10069-10127msec
00:27:37.160
00:27:37.160 Disk stats (read/write):
00:27:37.160 nvme0n1: ios=14240/0, merge=0/0, ticks=1245347/0, in_queue=1245347, util=97.39%
00:27:37.160 nvme10n1: ios=12883/0, merge=0/0, ticks=1229464/0, in_queue=1229464, util=97.56%
00:27:37.160 nvme1n1: ios=14616/0, merge=0/0, ticks=1245821/0, in_queue=1245821, util=97.85%
00:27:37.160 nvme2n1: ios=17346/0, merge=0/0, ticks=1236142/0, in_queue=1236142, util=97.99%
00:27:37.160 nvme3n1: ios=12864/0, merge=0/0, ticks=1232353/0, in_queue=1232353, util=98.07%
00:27:37.161 nvme4n1: ios=12970/0, merge=0/0, ticks=1236369/0, in_queue=1236369, util=98.36%
00:27:37.161 nvme5n1: ios=12208/0, merge=0/0, ticks=1235016/0, in_queue=1235016, util=98.51%
00:27:37.161 nvme6n1: ios=12068/0, merge=0/0, ticks=1238202/0, in_queue=1238202, util=98.60%
00:27:37.161 nvme7n1: ios=12695/0, merge=0/0, ticks=1237972/0, in_queue=1237972, util=98.99%
00:27:37.161 nvme8n1: ios=12934/0, merge=0/0, ticks=1240723/0, in_queue=1240723, util=99.13%
00:27:37.161 nvme9n1: ios=18524/0, merge=0/0, ticks=1244954/0, in_queue=1244954, util=99.25%
00:27:37.161 11:33:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:27:37.161 [global]
00:27:37.161 thread=1
00:27:37.161 invalidate=1
00:27:37.161 rw=randwrite
00:27:37.161 time_based=1
00:27:37.161 runtime=10
00:27:37.161 ioengine=libaio
00:27:37.161 direct=1
00:27:37.161 bs=262144
00:27:37.161 iodepth=64
00:27:37.161 norandommap=1
00:27:37.161 numjobs=1
00:27:37.161
00:27:37.161 [job0]
00:27:37.161 filename=/dev/nvme0n1
00:27:37.161 [job1]
00:27:37.161 filename=/dev/nvme10n1
00:27:37.161 [job2]
00:27:37.161 filename=/dev/nvme1n1
00:27:37.161 [job3]
00:27:37.161 filename=/dev/nvme2n1
00:27:37.161 [job4]
00:27:37.161 filename=/dev/nvme3n1
00:27:37.161 [job5]
00:27:37.161 filename=/dev/nvme4n1
00:27:37.161 [job6]
00:27:37.161 filename=/dev/nvme5n1
00:27:37.161 [job7]
00:27:37.161 filename=/dev/nvme6n1
00:27:37.161 [job8]
00:27:37.161 filename=/dev/nvme7n1
00:27:37.161 [job9]
00:27:37.161 filename=/dev/nvme8n1
00:27:37.161 [job10]
00:27:37.161 filename=/dev/nvme9n1
00:27:37.161 Could not set queue depth (nvme0n1)
00:27:37.161 Could not set queue depth (nvme10n1)
00:27:37.161 Could not set queue depth (nvme1n1)
00:27:37.161 Could not set queue depth (nvme2n1)
00:27:37.161 Could not set queue depth (nvme3n1)
00:27:37.161 Could not set queue depth (nvme4n1)
00:27:37.161 Could not set queue depth (nvme5n1)
00:27:37.161 Could not set queue depth (nvme6n1)
00:27:37.161 Could not set queue depth (nvme7n1)
00:27:37.161 Could not set queue depth (nvme8n1)
00:27:37.161 Could not set queue depth (nvme9n1)
00:27:37.161 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:27:37.161 fio-3.35
00:27:37.161 Starting 11 threads
00:27:47.142
00:27:47.142 job0: (groupid=0, jobs=1): err= 0: pid=1026638: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=386, BW=96.6MiB/s (101MB/s)(977MiB/10111msec); 0 zone resets
00:27:47.142 slat (usec): min=22, max=54400, avg=2112.35, stdev=4966.49
00:27:47.142 clat (usec): min=1468, max=292239, avg=163407.39, stdev=66199.52
00:27:47.142 lat (usec): min=1705, max=292276, avg=165519.74, stdev=67246.77
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 44], 20.00th=[ 118],
00:27:47.142 | 30.00th=[ 146], 40.00th=[ 163], 50.00th=[ 182], 60.00th=[ 199],
00:27:47.142 | 70.00th=[ 207], 80.00th=[ 215], 90.00th=[ 228], 95.00th=[ 239],
00:27:47.142 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 292],
00:27:47.142 | 99.99th=[ 292]
00:27:47.142 bw ( KiB/s): min=65536, max=195072, per=6.81%, avg=98432.00, stdev=32915.41, samples=20
00:27:47.142 iops : min= 256, max= 762, avg=384.50, stdev=128.58, samples=20
00:27:47.142 lat (msec) : 2=0.08%, 4=0.36%, 10=2.51%, 20=2.41%, 50=5.45%
00:27:47.142 lat (msec) : 100=7.60%, 250=79.02%, 500=2.58%
00:27:47.142 cpu : usr=0.78%, sys=1.27%, ctx=1876, majf=0, minf=1
00:27:47.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:27:47.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.142 issued rwts: total=0,3908,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.142 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.142 job1: (groupid=0, jobs=1): err= 0: pid=1026650: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=418, BW=105MiB/s (110MB/s)(1057MiB/10114msec); 0 zone resets
00:27:47.142 slat (usec): min=17, max=77883, avg=2135.67, stdev=4869.38
00:27:47.142 clat (usec): min=1825, max=284068, avg=150824.26, stdev=69164.61
00:27:47.142 lat (usec): min=1895, max=296431, avg=152959.93, stdev=70196.80
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 38], 20.00th=[ 90],
00:27:47.142 | 30.00th=[ 126], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 180],
00:27:47.142 | 70.00th=[ 199], 80.00th=[ 218], 90.00th=[ 232], 95.00th=[ 243],
00:27:47.142 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 284],
00:27:47.142 | 99.99th=[ 284]
00:27:47.142 bw ( KiB/s): min=65536, max=223744, per=7.38%, avg=106649.60, stdev=39770.21, samples=20
00:27:47.142 iops : min= 256, max= 874, avg=416.60, stdev=155.35, samples=20
00:27:47.142 lat (msec) : 2=0.02%, 4=0.26%, 10=1.61%, 20=2.58%, 50=9.10%
00:27:47.142 lat (msec) : 100=10.17%, 250=73.35%, 500=2.91%
00:27:47.142 cpu : usr=1.14%, sys=1.41%, ctx=1799, majf=0, minf=1
00:27:47.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:27:47.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.142 issued rwts: total=0,4229,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.142 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.142 job2: (groupid=0, jobs=1): err= 0: pid=1026652: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=411, BW=103MiB/s (108MB/s)(1043MiB/10135msec); 0 zone resets
00:27:47.142 slat (usec): min=26, max=40309, avg=2134.40, stdev=4695.91
00:27:47.142 clat (msec): min=3, max=269, avg=153.24, stdev=64.59
00:27:47.142 lat (msec): min=3, max=269, avg=155.37, stdev=65.52
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 54], 20.00th=[ 93],
00:27:47.142 | 30.00th=[ 124], 40.00th=[ 146], 50.00th=[ 161], 60.00th=[ 176],
00:27:47.142 | 70.00th=[ 201], 80.00th=[ 213], 90.00th=[ 234], 95.00th=[ 243],
00:27:47.142 | 99.00th=[ 259], 99.50th=[ 266], 99.90th=[ 268], 99.95th=[ 271],
00:27:47.142 | 99.99th=[ 271]
00:27:47.142 bw ( KiB/s): min=65536, max=226304, per=7.28%, avg=105179.15, stdev=38791.39, samples=20
00:27:47.142 iops : min= 256, max= 884, avg=410.85, stdev=151.52, samples=20
00:27:47.142 lat (msec) : 4=0.07%, 10=1.27%, 20=1.99%, 50=4.82%, 100=13.76%
00:27:47.142 lat (msec) : 250=76.29%, 500=1.80%
00:27:47.142 cpu : usr=0.88%, sys=1.19%, ctx=1630, majf=0, minf=1
00:27:47.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:27:47.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.142 issued rwts: total=0,4171,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.142 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.142 job3: (groupid=0, jobs=1): err= 0: pid=1026653: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=380, BW=95.2MiB/s (99.8MB/s)(963MiB/10115msec); 0 zone resets
00:27:47.142 slat (usec): min=24, max=59883, avg=2393.67, stdev=5246.13
00:27:47.142 clat (usec): min=1998, max=285176, avg=165591.07, stdev=68132.28
00:27:47.142 lat (msec): min=2, max=285, avg=167.98, stdev=69.09
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 70], 20.00th=[ 117],
00:27:47.142 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 180], 60.00th=[ 205],
00:27:47.142 | 70.00th=[ 218], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 253],
00:27:47.142 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 284],
00:27:47.142 | 99.99th=[ 284]
00:27:47.142 bw ( KiB/s): min=63488, max=131072, per=6.71%, avg=96998.40, stdev=22208.79, samples=20
00:27:47.142 iops : min= 248, max= 512, avg=378.90, stdev=86.75, samples=20
00:27:47.142 lat (msec) : 2=0.03%, 4=0.21%, 10=1.69%, 20=2.18%, 50=2.96%
00:27:47.142 lat (msec) : 100=7.84%, 250=78.79%, 500=6.31%
00:27:47.142 cpu : usr=0.91%, sys=1.38%, ctx=1490, majf=0, minf=1
00:27:47.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4%
00:27:47.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.142 issued rwts: total=0,3852,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.142 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.142 job4: (groupid=0, jobs=1): err= 0: pid=1026654: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=439, BW=110MiB/s (115MB/s)(1107MiB/10075msec); 0 zone resets
00:27:47.142 slat (usec): min=26, max=65419, avg=1767.10, stdev=4160.75
00:27:47.142 clat (msec): min=2, max=272, avg=143.84, stdev=56.38
00:27:47.142 lat (msec): min=2, max=272, avg=145.60, stdev=56.99
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 75], 20.00th=[ 111],
00:27:47.142 | 30.00th=[ 121], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 148],
00:27:47.142 | 70.00th=[ 176], 80.00th=[ 203], 90.00th=[ 220], 95.00th=[ 236],
00:27:47.142 | 99.00th=[ 259], 99.50th=[ 266], 99.90th=[ 268], 99.95th=[ 268],
00:27:47.142 | 99.99th=[ 271]
00:27:47.142 bw ( KiB/s): min=63488, max=167424, per=7.73%, avg=111718.40, stdev=28870.34, samples=20
00:27:47.142 iops : min= 248, max= 654, avg=436.40, stdev=112.77, samples=20
00:27:47.142 lat (msec) : 4=0.29%, 10=0.93%, 20=1.51%, 50=3.55%, 100=9.89%
00:27:47.142 lat (msec) : 250=81.25%, 500=2.58%
00:27:47.142 cpu : usr=0.99%, sys=1.27%, ctx=2012, majf=0, minf=1
00:27:47.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:27:47.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.142 issued rwts: total=0,4427,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.142 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.142 job5: (groupid=0, jobs=1): err= 0: pid=1026655: Fri Jul 12 11:33:32 2024
00:27:47.142 write: IOPS=512, BW=128MiB/s (134MB/s)(1296MiB/10111msec); 0 zone resets
00:27:47.142 slat (usec): min=21, max=60301, avg=1657.88, stdev=3540.77
00:27:47.142 clat (usec): min=1754, max=265420, avg=123093.25, stdev=46088.40
00:27:47.142 lat (usec): min=1807, max=265464, avg=124751.13, stdev=46651.07
00:27:47.142 clat percentiles (msec):
00:27:47.142 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 66], 20.00th=[ 90],
00:27:47.142 | 30.00th=[ 104], 40.00th=[ 118], 50.00th=[ 126], 60.00th=[ 129],
00:27:47.142 | 70.00th=[ 146], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 199],
00:27:47.142 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 255], 99.95th=[ 259],
00:27:47.142 | 99.99th=[ 266]
00:27:47.142 bw ( KiB/s): min=86016, max=183808, per=9.07%, avg=131139.55, stdev=28206.88, samples=20
00:27:47.143 iops : min= 336, max= 718, avg=512.25, stdev=110.17, samples=20
00:27:47.143 lat (msec) : 2=0.04%, 4=0.31%, 10=1.33%, 20=1.74%, 50=5.00%
00:27:47.143 lat (msec) : 100=19.86%, 250=71.51%, 500=0.21%
00:27:47.143 cpu : usr=1.10%, sys=1.46%, ctx=2073, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,5185,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143 job6: (groupid=0, jobs=1): err= 0: pid=1026656: Fri Jul 12 11:33:32 2024
00:27:47.143 write: IOPS=656, BW=164MiB/s (172MB/s)(1659MiB/10112msec); 0 zone resets
00:27:47.143 slat (usec): min=20, max=57114, avg=1156.37, stdev=3129.88
00:27:47.143 clat (usec): min=1140, max=262510, avg=96338.17, stdev=61214.23
00:27:47.143 lat (usec): min=1183, max=266267, avg=97494.54, stdev=61912.61
00:27:47.143 clat percentiles (msec):
00:27:47.143 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 47], 20.00th=[ 51],
00:27:47.143 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 72], 60.00th=[ 105],
00:27:47.143 | 70.00th=[ 125], 80.00th=[ 150], 90.00th=[ 190], 95.00th=[ 220],
00:27:47.143 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 259], 99.95th=[ 259],
00:27:47.143 | 99.99th=[ 264]
00:27:47.143 bw ( KiB/s): min=65536, max=319488, per=11.64%, avg=168268.80, stdev=82714.01, samples=20
00:27:47.143 iops : min= 256, max= 1248, avg=657.30, stdev=323.10, samples=20
00:27:47.143 lat (msec) : 2=0.14%, 4=0.36%, 10=1.73%, 20=2.08%, 50=15.34%
00:27:47.143 lat (msec) : 100=39.57%, 250=39.74%, 500=1.04%
00:27:47.143 cpu : usr=1.44%, sys=1.87%, ctx=3070, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,6636,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143 job7: (groupid=0, jobs=1): err= 0: pid=1026657: Fri Jul 12 11:33:32 2024
00:27:47.143 write: IOPS=698, BW=175MiB/s (183MB/s)(1756MiB/10061msec); 0 zone resets
00:27:47.143 slat (usec): min=24, max=58201, avg=1039.26, stdev=2856.28
00:27:47.143 clat (usec): min=1210, max=254760, avg=90582.79, stdev=54689.40
00:27:47.143 lat (usec): min=1275, max=257664, avg=91622.04, stdev=55300.24
00:27:47.143 clat percentiles (msec):
00:27:47.143 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 48],
00:27:47.143 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 82], 60.00th=[ 92],
00:27:47.143 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 171], 95.00th=[ 201],
00:27:47.143 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 251], 99.95th=[ 253],
00:27:47.143 | 99.99th=[ 255]
00:27:47.143 bw ( KiB/s): min=86016, max=328192, per=12.33%, avg=178227.20, stdev=69392.61, samples=20
00:27:47.143 iops : min= 336, max= 1282, avg=696.20, stdev=271.06, samples=20
00:27:47.143 lat (msec) : 2=0.14%, 4=0.58%, 10=2.11%, 20=2.83%, 50=19.09%
00:27:47.143 lat (msec) : 100=38.11%, 250=37.01%, 500=0.13%
00:27:47.143 cpu : usr=1.52%, sys=1.96%, ctx=3697, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,7025,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143 job8: (groupid=0, jobs=1): err= 0: pid=1026658: Fri Jul 12 11:33:32 2024
00:27:47.143 write: IOPS=622, BW=156MiB/s (163MB/s)(1577MiB/10134msec); 0 zone resets
00:27:47.143 slat (usec): min=17, max=58102, avg=1197.76, stdev=3534.08
00:27:47.143 clat (usec): min=1328, max=268384, avg=101614.86, stdev=75485.77
00:27:47.143 lat (usec): min=1378, max=268432, avg=102812.62, stdev=76503.26
00:27:47.143 clat percentiles (msec):
00:27:47.143 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 47],
00:27:47.143 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 61], 60.00th=[ 85],
00:27:47.143 | 70.00th=[ 148], 80.00th=[ 201], 90.00th=[ 222], 95.00th=[ 234],
00:27:47.143 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 262],
00:27:47.143 | 99.99th=[ 268]
00:27:47.143 bw ( KiB/s): min=67584, max=323584, per=11.06%, avg=159847.85, stdev=94372.60, samples=20
00:27:47.143 iops : min= 264, max= 1264, avg=624.40, stdev=368.64, samples=20
00:27:47.143 lat (msec) : 2=0.21%, 4=0.65%, 10=3.11%, 20=3.60%, 50=30.24%
00:27:47.143 lat (msec) : 100=25.25%, 250=36.31%, 500=0.63%
00:27:47.143 cpu : usr=1.45%, sys=1.81%, ctx=3191, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,6306,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143 job9: (groupid=0, jobs=1): err= 0: pid=1026659: Fri Jul 12 11:33:32 2024
00:27:47.143 write: IOPS=617, BW=154MiB/s (162MB/s)(1561MiB/10115msec); 0 zone resets
00:27:47.143 slat (usec): min=25, max=91075, avg=1271.22, stdev=3775.96
00:27:47.143 clat (usec): min=1117, max=290314, avg=102201.65, stdev=69035.09
00:27:47.143 lat (usec): min=1163, max=293616, avg=103472.86, stdev=69929.04
00:27:47.143 clat percentiles (msec):
00:27:47.143 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 25], 20.00th=[ 51],
00:27:47.143 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 86], 60.00th=[ 97],
00:27:47.143 | 70.00th=[ 124], 80.00th=[ 157], 90.00th=[ 218], 95.00th=[ 243],
00:27:47.143 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 288],
00:27:47.143 | 99.99th=[ 292]
00:27:47.143 bw ( KiB/s): min=63488, max=296448, per=10.95%, avg=158208.00, stdev=73341.68, samples=20
00:27:47.143 iops : min= 248, max= 1158, avg=618.00, stdev=286.49, samples=20
00:27:47.143 lat (msec) : 2=0.34%, 4=0.58%, 10=2.85%, 20=4.73%, 50=10.92%
00:27:47.143 lat (msec) : 100=40.96%, 250=35.93%, 500=3.70%
00:27:47.143 cpu : usr=1.37%, sys=1.74%, ctx=3089, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,6243,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143 job10: (groupid=0, jobs=1): err= 0: pid=1026660: Fri Jul 12 11:33:32 2024
00:27:47.143 write: IOPS=519, BW=130MiB/s (136MB/s)(1311MiB/10088msec); 0 zone resets
00:27:47.143 slat (usec): min=19, max=111312, avg=1532.01, stdev=3950.13
00:27:47.143 clat (msec): min=2, max=288, avg=121.26, stdev=55.67
00:27:47.143 lat (msec): min=3, max=292, avg=122.80, stdev=56.25
00:27:47.143 clat percentiles (msec):
00:27:47.143 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 38], 20.00th=[ 85],
00:27:47.143 | 30.00th=[ 104], 40.00th=[ 118], 50.00th=[ 124], 60.00th=[ 126],
00:27:47.143 | 70.00th=[ 129], 80.00th=[ 161], 90.00th=[ 205], 95.00th=[ 228],
00:27:47.143 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 288],
00:27:47.143 | 99.99th=[ 288]
00:27:47.143 bw ( KiB/s): min=77312, max=208384, per=9.17%, avg=132608.00, stdev=36673.38, samples=20
00:27:47.143 iops : min= 302, max= 814, avg=518.00, stdev=143.26, samples=20
00:27:47.143 lat (msec) : 4=0.04%, 10=1.11%, 20=3.40%, 50=7.78%, 100=16.99%
00:27:47.143 lat (msec) : 250=68.76%, 500=1.93%
00:27:47.143 cpu : usr=1.20%, sys=1.52%, ctx=2422, majf=0, minf=1
00:27:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:47.143 issued rwts: total=0,5243,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:47.143 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:47.143
00:27:47.143 Run status group 0 (all jobs):
00:27:47.143 WRITE: bw=1412MiB/s (1480MB/s), 95.2MiB/s-175MiB/s (99.8MB/s-183MB/s), io=14.0GiB (15.0GB), run=10061-10135msec
00:27:47.143
00:27:47.143 Disk stats (read/write):
00:27:47.143 nvme0n1: ios=49/7608, merge=0/0, ticks=81/1212062, in_queue=1212143, util=97.72%
00:27:47.143 nvme10n1: ios=47/8254, merge=0/0, ticks=503/1208403, in_queue=1208906, util=100.00%
00:27:47.143 nvme1n1: ios=40/8140, merge=0/0, ticks=699/1212072, in_queue=1212771, util=100.00%
00:27:47.143 nvme2n1: ios=0/7522, merge=0/0, ticks=0/1205950, in_queue=1205950, util=97.71%
00:27:47.143 nvme3n1: ios=0/8600, merge=0/0, ticks=0/1217976, in_queue=1217976, util=97.78%
00:27:47.143 nvme4n1: ios=0/10162, merge=0/0, ticks=0/1209539, in_queue=1209539, util=98.11%
00:27:47.143 nvme5n1: ios=0/13013, merge=0/0, ticks=0/1217459, in_queue=1217459, util=98.23%
00:27:47.143 nvme6n1: ios=46/13792, merge=0/0, ticks=753/1216954, in_queue=1217707, util=100.00%
00:27:47.143 nvme7n1: ios=0/12417, merge=0/0, ticks=0/1216184, in_queue=1216184, util=98.76%
00:27:47.143 nvme8n1: ios=38/12239, merge=0/0, ticks=628/1211005, in_queue=1211633, util=100.00%
00:27:47.143 nvme9n1: ios=49/10301, merge=0/0, ticks=693/1209154, in_queue=1209847, util=100.00%
00:27:47.143 11:33:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:27:47.143 11:33:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:27:47.143 11:33:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:47.143 11:33:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:47.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:47.143 11:33:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:27:47.712 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2
00:27:47.712 11:33:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:47.712 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:27:48.280 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:48.280 11:33:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:27:48.848 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4
00:27:48.848 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:48.849 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:27:49.418 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:49.418 11:33:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:27:50.009 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:50.009 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:27:50.010 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.010 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:50.010 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.010 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:50.010 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:27:50.269 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:50.269 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:27:50.528 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:50.528 11:33:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:27:50.787 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:50.787 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:27:51.047 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:51.047 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:27:51.616 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection --
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:51.616 rmmod nvme_tcp 00:27:51.616 rmmod nvme_fabrics 00:27:51.616 rmmod nvme_keyring 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1017918 ']' 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1017918 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1017918 ']' 00:27:51.616 11:33:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1017918 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1017918 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1017918' 00:27:51.616 killing process with pid 1017918 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1017918 00:27:51.616 11:33:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1017918 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.805 11:33:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.181 11:33:43 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:27:57.181 00:27:57.181 real 1m16.857s 00:27:57.181 user 4m33.184s 00:27:57.181 sys 0m22.408s 00:27:57.181 11:33:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.181 11:33:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:57.181 ************************************ 00:27:57.181 END TEST nvmf_multiconnection 00:27:57.181 ************************************ 00:27:57.181 11:33:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:57.181 11:33:43 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:57.181 11:33:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:57.181 11:33:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.181 11:33:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.181 ************************************ 00:27:57.181 START TEST nvmf_initiator_timeout 00:27:57.181 ************************************ 00:27:57.181 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:57.440 * Looking for test storage... 
00:27:57.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.440 11:33:43 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.440 11:33:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.707 
11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.707 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:28:02.707 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.707 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.707 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.708 11:33:48 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:28:02.708 00:28:02.708 --- 10.0.0.2 ping statistics --- 00:28:02.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.708 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:28:02.708 00:28:02.708 --- 10.0.0.1 ping statistics --- 00:28:02.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.708 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1032570 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1032570 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1032570 ']' 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.708 11:33:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:02.708 [2024-07-12 11:33:48.510739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:02.708 [2024-07-12 11:33:48.510825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.708 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.708 [2024-07-12 11:33:48.618461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.708 [2024-07-12 11:33:48.836525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.708 [2024-07-12 11:33:48.836570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.708 [2024-07-12 11:33:48.836582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.708 [2024-07-12 11:33:48.836590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.708 [2024-07-12 11:33:48.836599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:02.708 [2024-07-12 11:33:48.836669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.708 [2024-07-12 11:33:48.836745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.708 [2024-07-12 11:33:48.836803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.708 [2024-07-12 11:33:48.836814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.966 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.966 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:02.966 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.966 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:02.966 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 Malloc0 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 Delay0 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 [2024-07-12 11:33:49.433352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.225 11:33:49 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:03.225 [2024-07-12 11:33:49.461600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.225 11:33:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:04.602 11:33:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:04.602 11:33:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:04.602 11:33:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:04.602 11:33:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:04.602 11:33:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:06.507 11:33:52 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1033288 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:06.507 11:33:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:06.507 [global] 00:28:06.507 thread=1 00:28:06.507 invalidate=1 00:28:06.507 rw=write 00:28:06.507 time_based=1 00:28:06.507 runtime=60 00:28:06.507 ioengine=libaio 00:28:06.507 direct=1 00:28:06.507 bs=4096 00:28:06.507 iodepth=1 00:28:06.507 norandommap=0 00:28:06.507 numjobs=1 00:28:06.507 00:28:06.507 verify_dump=1 00:28:06.507 verify_backlog=512 00:28:06.507 verify_state_save=0 00:28:06.507 do_verify=1 00:28:06.507 verify=crc32c-intel 00:28:06.507 [job0] 00:28:06.507 filename=/dev/nvme0n1 00:28:06.507 Could not set queue depth (nvme0n1) 00:28:06.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:06.767 fio-3.35 00:28:06.767 Starting 1 thread 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.301 true 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.301 true 00:28:09.301 11:33:55 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.301 true 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.301 true 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.301 11:33:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:12.601 true 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:12.601 true 
00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:12.601 true 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:12.601 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.602 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:12.602 true 00:28:12.602 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.602 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:12.602 11:33:58 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1033288 00:29:08.831 00:29:08.831 job0: (groupid=0, jobs=1): err= 0: pid=1033408: Fri Jul 12 11:34:53 2024 00:29:08.831 read: IOPS=218, BW=873KiB/s (894kB/s)(51.2MiB/60012msec) 00:29:08.831 slat (nsec): min=6216, max=44727, avg=8062.09, stdev=2620.20 00:29:08.831 clat (usec): min=202, max=42066, avg=1184.48, stdev=6010.87 00:29:08.831 lat (usec): min=209, max=42087, avg=1192.54, stdev=6012.92 00:29:08.831 clat percentiles (usec): 00:29:08.831 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:29:08.831 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:29:08.831 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 469], 00:29:08.831 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41681], 00:29:08.831 | 99.99th=[41681] 00:29:08.831 write: IOPS=221, BW=887KiB/s (909kB/s)(52.0MiB/60012msec); 0 zone resets 00:29:08.831 slat (usec): min=8, max=36533, avg=17.14, stdev=406.27 00:29:08.831 clat (usec): min=155, max=41428k, avg=3311.92, stdev=359061.56 00:29:08.831 lat (usec): min=166, max=41428k, avg=3329.06, stdev=359061.74 00:29:08.831 clat percentiles (usec): 00:29:08.831 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:29:08.831 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:29:08.831 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 239], 00:29:08.831 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 367], 00:29:08.831 | 99.99th=[ 3294] 00:29:08.831 bw ( KiB/s): min= 648, max= 8768, per=100.00%, avg=7606.86, stdev=2022.98, samples=14 00:29:08.831 iops : min= 162, max= 2192, avg=1901.71, stdev=505.75, samples=14 00:29:08.831 lat (usec) : 250=56.78%, 500=41.85%, 750=0.23%, 1000=0.01% 00:29:08.831 lat (msec) : 2=0.02%, 4=0.01%, 50=1.11%, >=2000=0.01% 00:29:08.831 cpu : usr=0.30%, sys=0.56%, ctx=26419, majf=0, minf=2 00:29:08.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.831 issued rwts: total=13100,13312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:08.831 00:29:08.831 Run status group 0 (all jobs): 00:29:08.831 READ: bw=873KiB/s (894kB/s), 873KiB/s-873KiB/s (894kB/s-894kB/s), io=51.2MiB (53.7MB), run=60012-60012msec 00:29:08.831 WRITE: bw=887KiB/s (909kB/s), 887KiB/s-887KiB/s (909kB/s-909kB/s), io=52.0MiB (54.5MB), run=60012-60012msec 00:29:08.831 00:29:08.831 Disk stats (read/write): 00:29:08.831 nvme0n1: ios=13151/13312, merge=0/0, ticks=15914/2562, in_queue=18476, util=99.79% 00:29:08.831 11:34:53 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:08.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:08.831 nvmf hotplug test: fio successful as expected 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:08.831 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.832 rmmod nvme_tcp 00:29:08.832 rmmod nvme_fabrics 00:29:08.832 rmmod nvme_keyring 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1032570 ']' 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1032570 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1032570 ']' 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1032570 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032570 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032570' 00:29:08.832 killing process with pid 1032570 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1032570 00:29:08.832 11:34:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1032570 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.832 11:34:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.793 11:34:57 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:10.793 00:29:10.793 real 1m13.556s 00:29:10.793 user 4m28.314s 00:29:10.793 sys 0m6.028s 00:29:10.793 11:34:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:10.793 11:34:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:10.793 ************************************ 00:29:10.793 END TEST nvmf_initiator_timeout 00:29:10.793 ************************************ 00:29:10.793 11:34:57 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:29:10.793 11:34:57 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:29:10.793 11:34:57 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:29:10.793 11:34:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:29:10.793 11:34:57 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:29:10.793 11:34:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:16.068 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.068 11:35:01 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:16.068 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:16.069 Found net devices under 0000:86:00.0: cvl_0_0 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:16.069 Found net devices under 0000:86:00.1: cvl_0_1 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:29:16.069 11:35:01 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:16.069 11:35:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:16.069 11:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.069 11:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.069 ************************************ 00:29:16.069 START TEST nvmf_perf_adq 00:29:16.069 ************************************ 00:29:16.069 11:35:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:16.069 * Looking for test storage... 
00:29:16.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.069 11:35:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.069 11:35:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.069 11:35:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:21.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.340 11:35:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:21.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:21.340 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.341 
11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:21.341 Found net devices under 0000:86:00.0: cvl_0_0 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:21.341 Found net devices under 0000:86:00.1: cvl_0_1 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:21.341 11:35:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:21.600 11:35:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:23.504 11:35:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.793 11:35:14 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:28.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.793 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:28.794 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.794 11:35:14 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:28.794 Found net devices under 0000:86:00.0: cvl_0_0 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:28.794 Found net devices under 0000:86:00.1: cvl_0_1 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.794 11:35:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:29:28.794 00:29:28.794 --- 10.0.0.2 ping statistics --- 00:29:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.794 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:28.794 00:29:28.794 --- 10.0.0.1 ping statistics --- 00:29:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.794 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1050959 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1050959 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 1050959 ']' 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.794 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:28.794 [2024-07-12 11:35:15.144283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:28.794 [2024-07-12 11:35:15.144391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.053 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.053 [2024-07-12 11:35:15.255095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.312 [2024-07-12 11:35:15.475084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.312 [2024-07-12 11:35:15.475128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.312 [2024-07-12 11:35:15.475141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.312 [2024-07-12 11:35:15.475150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.312 [2024-07-12 11:35:15.475160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:29.312 [2024-07-12 11:35:15.475274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.312 [2024-07-12 11:35:15.475313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.312 [2024-07-12 11:35:15.475399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.312 [2024-07-12 11:35:15.475402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.571 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.571 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:29.571 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.571 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.571 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.830 11:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:29.831 11:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.831 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:29.831 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.831 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.089 [2024-07-12 11:35:16.392853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.089 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.349 Malloc1 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.349 
11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:30.349 [2024-07-12 11:35:16.502766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1051211 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:30.349 11:35:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:30.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:29:32.253 
"tick_rate": 2300000000, 00:29:32.253 "poll_groups": [ 00:29:32.253 { 00:29:32.253 "name": "nvmf_tgt_poll_group_000", 00:29:32.253 "admin_qpairs": 1, 00:29:32.253 "io_qpairs": 1, 00:29:32.253 "current_admin_qpairs": 1, 00:29:32.253 "current_io_qpairs": 1, 00:29:32.253 "pending_bdev_io": 0, 00:29:32.253 "completed_nvme_io": 18295, 00:29:32.253 "transports": [ 00:29:32.253 { 00:29:32.253 "trtype": "TCP" 00:29:32.253 } 00:29:32.253 ] 00:29:32.253 }, 00:29:32.253 { 00:29:32.253 "name": "nvmf_tgt_poll_group_001", 00:29:32.253 "admin_qpairs": 0, 00:29:32.253 "io_qpairs": 1, 00:29:32.253 "current_admin_qpairs": 0, 00:29:32.253 "current_io_qpairs": 1, 00:29:32.253 "pending_bdev_io": 0, 00:29:32.253 "completed_nvme_io": 18522, 00:29:32.253 "transports": [ 00:29:32.253 { 00:29:32.253 "trtype": "TCP" 00:29:32.253 } 00:29:32.253 ] 00:29:32.253 }, 00:29:32.253 { 00:29:32.253 "name": "nvmf_tgt_poll_group_002", 00:29:32.253 "admin_qpairs": 0, 00:29:32.253 "io_qpairs": 1, 00:29:32.253 "current_admin_qpairs": 0, 00:29:32.253 "current_io_qpairs": 1, 00:29:32.253 "pending_bdev_io": 0, 00:29:32.253 "completed_nvme_io": 18601, 00:29:32.253 "transports": [ 00:29:32.253 { 00:29:32.253 "trtype": "TCP" 00:29:32.253 } 00:29:32.253 ] 00:29:32.253 }, 00:29:32.253 { 00:29:32.253 "name": "nvmf_tgt_poll_group_003", 00:29:32.253 "admin_qpairs": 0, 00:29:32.253 "io_qpairs": 1, 00:29:32.253 "current_admin_qpairs": 0, 00:29:32.253 "current_io_qpairs": 1, 00:29:32.253 "pending_bdev_io": 0, 00:29:32.253 "completed_nvme_io": 18327, 00:29:32.253 "transports": [ 00:29:32.253 { 00:29:32.253 "trtype": "TCP" 00:29:32.253 } 00:29:32.253 ] 00:29:32.253 } 00:29:32.253 ] 00:29:32.253 }' 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:29:32.253 11:35:18 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:29:32.253 11:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1051211 00:29:40.377 Initializing NVMe Controllers 00:29:40.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:40.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:40.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:40.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:40.377 Initialization complete. Launching workers. 00:29:40.377 ======================================================== 00:29:40.377 Latency(us) 00:29:40.377 Device Information : IOPS MiB/s Average min max 00:29:40.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10027.48 39.17 6383.01 2829.80 10633.96 00:29:40.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10067.48 39.33 6356.63 1949.32 11942.09 00:29:40.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9938.38 38.82 6440.89 2086.26 10387.86 00:29:40.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9887.38 38.62 6471.68 2595.73 11587.09 00:29:40.377 ======================================================== 00:29:40.377 Total : 39920.74 155.94 6412.73 1949.32 11942.09 00:29:40.377 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:40.377 11:35:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.377 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.636 rmmod nvme_tcp 00:29:40.636 rmmod nvme_fabrics 00:29:40.636 rmmod nvme_keyring 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1050959 ']' 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1050959 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1050959 ']' 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1050959 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1050959 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1050959' 00:29:40.636 killing process with pid 1050959 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1050959 00:29:40.636 11:35:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1050959 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:42.050 11:35:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.050 11:35:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.584 11:35:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.584 11:35:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:44.584 11:35:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:45.521 11:35:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:47.423 11:35:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.797 11:35:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:52.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:52.797 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:52.797 Found net devices under 0000:86:00.0: cvl_0_0 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:52.797 Found net devices under 0000:86:00.1: cvl_0_1 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.797 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:52.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:29:52.798 00:29:52.798 --- 10.0.0.2 ping statistics --- 00:29:52.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.798 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:52.798 00:29:52.798 --- 10.0.0.1 ping statistics --- 00:29:52.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.798 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:52.798 net.core.busy_poll = 1 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:52.798 net.core.busy_read = 1 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:52.798 11:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1055222 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1055222 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:52.798 11:35:39 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1055222 ']' 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.798 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.056 [2024-07-12 11:35:39.173166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:53.056 [2024-07-12 11:35:39.173248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.056 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.056 [2024-07-12 11:35:39.282495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.314 [2024-07-12 11:35:39.511223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.314 [2024-07-12 11:35:39.511264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.314 [2024-07-12 11:35:39.511277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.314 [2024-07-12 11:35:39.511285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:53.314 [2024-07-12 11:35:39.511295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.314 [2024-07-12 11:35:39.511365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.314 [2024-07-12 11:35:39.511446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.314 [2024-07-12 11:35:39.511473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.314 [2024-07-12 11:35:39.511484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.882 11:35:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.882 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.141 [2024-07-12 11:35:40.442648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.141 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.400 Malloc1 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:54.400 [2024-07-12 11:35:40.561802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1055470 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:54.400 11:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:54.400 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:56.306 "tick_rate": 2300000000, 00:29:56.306 "poll_groups": [ 00:29:56.306 { 00:29:56.306 "name": "nvmf_tgt_poll_group_000", 00:29:56.306 "admin_qpairs": 1, 00:29:56.306 "io_qpairs": 3, 00:29:56.306 "current_admin_qpairs": 1, 00:29:56.306 "current_io_qpairs": 3, 00:29:56.306 "pending_bdev_io": 0, 00:29:56.306 "completed_nvme_io": 26012, 00:29:56.306 "transports": [ 00:29:56.306 { 00:29:56.306 "trtype": "TCP" 00:29:56.306 } 00:29:56.306 ] 00:29:56.306 }, 00:29:56.306 { 00:29:56.306 "name": "nvmf_tgt_poll_group_001", 00:29:56.306 "admin_qpairs": 0, 00:29:56.306 "io_qpairs": 1, 00:29:56.306 "current_admin_qpairs": 0, 00:29:56.306 "current_io_qpairs": 1, 00:29:56.306 "pending_bdev_io": 0, 00:29:56.306 "completed_nvme_io": 25356, 00:29:56.306 "transports": [ 00:29:56.306 { 00:29:56.306 "trtype": "TCP" 00:29:56.306 } 00:29:56.306 ] 00:29:56.306 }, 00:29:56.306 { 00:29:56.306 "name": "nvmf_tgt_poll_group_002", 00:29:56.306 "admin_qpairs": 0, 00:29:56.306 "io_qpairs": 0, 00:29:56.306 "current_admin_qpairs": 0, 00:29:56.306 "current_io_qpairs": 0, 00:29:56.306 "pending_bdev_io": 0, 00:29:56.306 "completed_nvme_io": 0, 00:29:56.306 "transports": [ 00:29:56.306 { 00:29:56.306 "trtype": "TCP" 00:29:56.306 } 00:29:56.306 ] 00:29:56.306 }, 00:29:56.306 { 00:29:56.306 "name": "nvmf_tgt_poll_group_003", 00:29:56.306 "admin_qpairs": 0, 00:29:56.306 "io_qpairs": 0, 00:29:56.306 "current_admin_qpairs": 0, 00:29:56.306 "current_io_qpairs": 0, 00:29:56.306 "pending_bdev_io": 0, 00:29:56.306 "completed_nvme_io": 0, 00:29:56.306 "transports": [ 00:29:56.306 { 00:29:56.306 "trtype": "TCP" 00:29:56.306 } 00:29:56.306 ] 00:29:56.306 } 00:29:56.306 ] 00:29:56.306 }' 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:56.306 11:35:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1055470 00:30:06.288 Initializing NVMe Controllers 00:30:06.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:06.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:06.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:06.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:06.288 Initialization complete. Launching workers. 00:30:06.288 ======================================================== 00:30:06.288 Latency(us) 00:30:06.288 Device Information : IOPS MiB/s Average min max 00:30:06.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4648.50 18.16 13769.60 2044.59 63807.49 00:30:06.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4725.00 18.46 13572.12 2122.37 61843.50 00:30:06.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4510.00 17.62 14236.39 2118.11 60786.76 00:30:06.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14020.50 54.77 4564.39 1924.60 9513.33 00:30:06.288 ======================================================== 00:30:06.288 Total : 27903.99 109.00 9186.40 1924.60 63807.49 00:30:06.288 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.288 rmmod nvme_tcp 00:30:06.288 rmmod nvme_fabrics 00:30:06.288 rmmod nvme_keyring 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1055222 ']' 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1055222 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1055222 ']' 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1055222 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055222 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055222' 00:30:06.288 killing process with pid 1055222 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1055222 00:30:06.288 11:35:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1055222 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.288 11:35:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.193 11:35:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:08.193 11:35:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:08.193 00:30:08.193 real 0m52.553s 00:30:08.193 user 2m58.444s 00:30:08.193 sys 0m9.390s 00:30:08.193 11:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.193 11:35:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.193 ************************************ 00:30:08.193 END TEST nvmf_perf_adq 00:30:08.193 ************************************ 00:30:08.453 11:35:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:08.453 11:35:54 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:08.453 11:35:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:08.453 11:35:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.453 11:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.453 ************************************ 00:30:08.453 START TEST nvmf_shutdown 00:30:08.453 ************************************ 
00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:08.453 * Looking for test storage... 00:30:08.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:08.453 11:35:54 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:08.453 11:35:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:08.454 ************************************ 00:30:08.454 START TEST nvmf_shutdown_tc1 00:30:08.454 ************************************ 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.454 11:35:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:08.454 11:35:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:13.725 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:13.725 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:13.725 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:13.726 Found net devices under 0000:86:00.0: cvl_0_0 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.726 11:35:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:13.726 Found net devices under 0000:86:00.1: cvl_0_1 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:13.726 
11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:13.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:13.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:30:13.726 00:30:13.726 --- 10.0.0.2 ping statistics --- 00:30:13.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.726 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:30:13.726 00:30:13.726 --- 10.0.0.1 ping statistics --- 00:30:13.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.726 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1060728 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1060728 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1060728 ']' 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.726 11:35:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.726 [2024-07-12 11:35:59.887288] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:13.726 [2024-07-12 11:35:59.887375] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.726 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.726 [2024-07-12 11:35:59.997486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.984 [2024-07-12 11:36:00.228576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.984 [2024-07-12 11:36:00.228626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.984 [2024-07-12 11:36:00.228638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.984 [2024-07-12 11:36:00.228647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.984 [2024-07-12 11:36:00.228657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:13.984 [2024-07-12 11:36:00.228807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.984 [2024-07-12 11:36:00.228874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.984 [2024-07-12 11:36:00.228971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.984 [2024-07-12 11:36:00.228994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:14.567 [2024-07-12 11:36:00.713200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:14.567 
11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.567 11:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:14.567 Malloc1 00:30:14.567 [2024-07-12 11:36:00.882313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.825 Malloc2 00:30:14.825 Malloc3 00:30:14.825 Malloc4 00:30:15.083 Malloc5 00:30:15.084 Malloc6 00:30:15.342 Malloc7 00:30:15.342 Malloc8 00:30:15.601 Malloc9 00:30:15.601 Malloc10 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1061306 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1061306 
/var/tmp/bdevperf.sock 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1061306 ']' 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.601 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.601 { 00:30:15.601 "params": { 00:30:15.601 "name": "Nvme$subsystem", 00:30:15.601 "trtype": "$TEST_TRANSPORT", 00:30:15.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 "params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 
}, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 "params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 "params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 "params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 "params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.602 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.602 { 00:30:15.602 
"params": { 00:30:15.602 "name": "Nvme$subsystem", 00:30:15.602 "trtype": "$TEST_TRANSPORT", 00:30:15.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.602 "adrfam": "ipv4", 00:30:15.602 "trsvcid": "$NVMF_PORT", 00:30:15.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.602 "hdgst": ${hdgst:-false}, 00:30:15.602 "ddgst": ${ddgst:-false} 00:30:15.602 }, 00:30:15.602 "method": "bdev_nvme_attach_controller" 00:30:15.602 } 00:30:15.602 EOF 00:30:15.602 )") 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.862 { 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme$subsystem", 00:30:15.862 "trtype": "$TEST_TRANSPORT", 00:30:15.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "$NVMF_PORT", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.862 "hdgst": ${hdgst:-false}, 00:30:15.862 "ddgst": ${ddgst:-false} 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 } 00:30:15.862 EOF 00:30:15.862 )") 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.862 { 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme$subsystem", 00:30:15.862 "trtype": "$TEST_TRANSPORT", 00:30:15.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "$NVMF_PORT", 00:30:15.862 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.862 "hdgst": ${hdgst:-false}, 00:30:15.862 "ddgst": ${ddgst:-false} 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 } 00:30:15.862 EOF 00:30:15.862 )") 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.862 { 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme$subsystem", 00:30:15.862 "trtype": "$TEST_TRANSPORT", 00:30:15.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "$NVMF_PORT", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.862 "hdgst": ${hdgst:-false}, 00:30:15.862 "ddgst": ${ddgst:-false} 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 } 00:30:15.862 EOF 00:30:15.862 )") 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:15.862 [2024-07-12 11:36:01.986652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:15.862 [2024-07-12 11:36:01.986737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:15.862 11:36:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme1", 00:30:15.862 "trtype": "tcp", 00:30:15.862 "traddr": "10.0.0.2", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "4420", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.862 "hdgst": false, 00:30:15.862 "ddgst": false 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 },{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme2", 00:30:15.862 "trtype": "tcp", 00:30:15.862 "traddr": "10.0.0.2", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "4420", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:15.862 "hdgst": false, 00:30:15.862 "ddgst": false 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 },{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme3", 00:30:15.862 "trtype": "tcp", 00:30:15.862 "traddr": "10.0.0.2", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "4420", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:15.862 "hdgst": false, 00:30:15.862 "ddgst": false 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 },{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme4", 00:30:15.862 "trtype": "tcp", 00:30:15.862 "traddr": "10.0.0.2", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "4420", 00:30:15.862 "subnqn": 
"nqn.2016-06.io.spdk:cnode4", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:15.862 "hdgst": false, 00:30:15.862 "ddgst": false 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 },{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme5", 00:30:15.862 "trtype": "tcp", 00:30:15.862 "traddr": "10.0.0.2", 00:30:15.862 "adrfam": "ipv4", 00:30:15.862 "trsvcid": "4420", 00:30:15.862 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:15.862 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:15.862 "hdgst": false, 00:30:15.862 "ddgst": false 00:30:15.862 }, 00:30:15.862 "method": "bdev_nvme_attach_controller" 00:30:15.862 },{ 00:30:15.862 "params": { 00:30:15.862 "name": "Nvme6", 00:30:15.863 "trtype": "tcp", 00:30:15.863 "traddr": "10.0.0.2", 00:30:15.863 "adrfam": "ipv4", 00:30:15.863 "trsvcid": "4420", 00:30:15.863 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:15.863 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:15.863 "hdgst": false, 00:30:15.863 "ddgst": false 00:30:15.863 }, 00:30:15.863 "method": "bdev_nvme_attach_controller" 00:30:15.863 },{ 00:30:15.863 "params": { 00:30:15.863 "name": "Nvme7", 00:30:15.863 "trtype": "tcp", 00:30:15.863 "traddr": "10.0.0.2", 00:30:15.863 "adrfam": "ipv4", 00:30:15.863 "trsvcid": "4420", 00:30:15.863 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:15.863 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:15.863 "hdgst": false, 00:30:15.863 "ddgst": false 00:30:15.863 }, 00:30:15.863 "method": "bdev_nvme_attach_controller" 00:30:15.863 },{ 00:30:15.863 "params": { 00:30:15.863 "name": "Nvme8", 00:30:15.863 "trtype": "tcp", 00:30:15.863 "traddr": "10.0.0.2", 00:30:15.863 "adrfam": "ipv4", 00:30:15.863 "trsvcid": "4420", 00:30:15.863 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:15.863 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:15.863 "hdgst": false, 00:30:15.863 "ddgst": false 00:30:15.863 }, 00:30:15.863 "method": "bdev_nvme_attach_controller" 00:30:15.863 },{ 00:30:15.863 "params": { 00:30:15.863 "name": 
"Nvme9", 00:30:15.863 "trtype": "tcp", 00:30:15.863 "traddr": "10.0.0.2", 00:30:15.863 "adrfam": "ipv4", 00:30:15.863 "trsvcid": "4420", 00:30:15.863 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:15.863 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:15.863 "hdgst": false, 00:30:15.863 "ddgst": false 00:30:15.863 }, 00:30:15.863 "method": "bdev_nvme_attach_controller" 00:30:15.863 },{ 00:30:15.863 "params": { 00:30:15.863 "name": "Nvme10", 00:30:15.863 "trtype": "tcp", 00:30:15.863 "traddr": "10.0.0.2", 00:30:15.863 "adrfam": "ipv4", 00:30:15.863 "trsvcid": "4420", 00:30:15.863 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:15.863 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:15.863 "hdgst": false, 00:30:15.863 "ddgst": false 00:30:15.863 }, 00:30:15.863 "method": "bdev_nvme_attach_controller" 00:30:15.863 }' 00:30:15.863 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.863 [2024-07-12 11:36:02.093555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.122 [2024-07-12 11:36:02.328098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1061306 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 
-- # rm -f /var/run/spdk_bdev1 00:30:18.655 11:36:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:19.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1061306 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1060728 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.223 { 00:30:19.223 "params": { 00:30:19.223 "name": "Nvme$subsystem", 00:30:19.223 "trtype": "$TEST_TRANSPORT", 00:30:19.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.223 "adrfam": "ipv4", 00:30:19.223 "trsvcid": "$NVMF_PORT", 00:30:19.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.223 "hdgst": ${hdgst:-false}, 00:30:19.223 "ddgst": ${ddgst:-false} 00:30:19.223 }, 00:30:19.223 "method": "bdev_nvme_attach_controller" 00:30:19.223 } 00:30:19.223 EOF 00:30:19.223 )") 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.223 { 00:30:19.223 "params": { 00:30:19.223 "name": "Nvme$subsystem", 00:30:19.223 "trtype": "$TEST_TRANSPORT", 00:30:19.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.223 "adrfam": "ipv4", 00:30:19.223 "trsvcid": "$NVMF_PORT", 00:30:19.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.223 "hdgst": ${hdgst:-false}, 00:30:19.223 "ddgst": ${ddgst:-false} 00:30:19.223 }, 00:30:19.223 "method": "bdev_nvme_attach_controller" 00:30:19.223 } 00:30:19.223 EOF 00:30:19.223 )") 00:30:19.223 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": 
"Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": 
"bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.224 { 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme$subsystem", 00:30:19.224 "trtype": "$TEST_TRANSPORT", 00:30:19.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "$NVMF_PORT", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.224 "hdgst": ${hdgst:-false}, 00:30:19.224 "ddgst": ${ddgst:-false} 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 } 00:30:19.224 EOF 00:30:19.224 )") 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@556 -- # jq . 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:19.224 [2024-07-12 11:36:05.554365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:19.224 [2024-07-12 11:36:05.554460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061826 ] 00:30:19.224 11:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme1", 00:30:19.224 "trtype": "tcp", 00:30:19.224 "traddr": "10.0.0.2", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "4420", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.224 "hdgst": false, 00:30:19.224 "ddgst": false 00:30:19.224 }, 00:30:19.224 "method": "bdev_nvme_attach_controller" 00:30:19.224 },{ 00:30:19.224 "params": { 00:30:19.224 "name": "Nvme2", 00:30:19.224 "trtype": "tcp", 00:30:19.224 "traddr": "10.0.0.2", 00:30:19.224 "adrfam": "ipv4", 00:30:19.224 "trsvcid": "4420", 00:30:19.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme3", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": 
"Nvme4", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme5", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme6", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme7", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme8", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:19.225 "hdgst": false, 
00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme9", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 },{ 00:30:19.225 "params": { 00:30:19.225 "name": "Nvme10", 00:30:19.225 "trtype": "tcp", 00:30:19.225 "traddr": "10.0.0.2", 00:30:19.225 "adrfam": "ipv4", 00:30:19.225 "trsvcid": "4420", 00:30:19.225 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:19.225 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:19.225 "hdgst": false, 00:30:19.225 "ddgst": false 00:30:19.225 }, 00:30:19.225 "method": "bdev_nvme_attach_controller" 00:30:19.225 }' 00:30:19.484 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.484 [2024-07-12 11:36:05.660880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.742 [2024-07-12 11:36:05.899464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.645 Running I/O for 1 seconds... 
00:30:22.580 00:30:22.580 Latency(us) 00:30:22.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.580 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme1n1 : 1.07 243.04 15.19 0.00 0.00 258490.01 8548.17 240716.58 00:30:22.580 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme2n1 : 1.11 229.66 14.35 0.00 0.00 270920.13 26670.30 244363.80 00:30:22.580 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme3n1 : 1.07 243.80 15.24 0.00 0.00 245041.32 15500.69 233422.14 00:30:22.580 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme4n1 : 1.20 267.46 16.72 0.00 0.00 223223.63 12993.22 244363.80 00:30:22.580 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme5n1 : 1.11 231.12 14.45 0.00 0.00 254596.01 20401.64 242540.19 00:30:22.580 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme6n1 : 1.21 265.08 16.57 0.00 0.00 217930.35 7066.49 248011.02 00:30:22.580 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme7n1 : 1.12 228.73 14.30 0.00 0.00 248383.00 15158.76 244363.80 00:30:22.580 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme8n1 : 1.21 264.23 16.51 0.00 0.00 212560.50 14930.81 251658.24 00:30:22.580 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme9n1 : 1.21 263.80 16.49 0.00 0.00 209163.98 6240.17 251658.24 00:30:22.580 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:22.580 Verification LBA range: start 0x0 length 0x400 00:30:22.580 Nvme10n1 : 1.22 261.62 16.35 0.00 0.00 208147.86 14816.83 269894.34 00:30:22.580 =================================================================================================================== 00:30:22.580 Total : 2498.53 156.16 0.00 0.00 232618.78 6240.17 269894.34 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.959 rmmod nvme_tcp 00:30:23.959 rmmod nvme_fabrics 00:30:23.959 rmmod 
nvme_keyring 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1060728 ']' 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1060728 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1060728 ']' 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1060728 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060728 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060728' 00:30:23.959 killing process with pid 1060728 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1060728 00:30:23.959 11:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1060728 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.247 11:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.229 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:29.229 00:30:29.229 real 0m20.820s 00:30:29.229 user 0m59.285s 00:30:29.229 sys 0m5.514s 00:30:29.229 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:29.229 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:29.229 ************************************ 00:30:29.229 END TEST nvmf_shutdown_tc1 00:30:29.229 ************************************ 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.489 ************************************ 00:30:29.489 START TEST nvmf_shutdown_tc2 00:30:29.489 ************************************ 
00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.489 11:36:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.489 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:29.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:29.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.490 11:36:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:29.490 Found net devices under 0000:86:00.0: cvl_0_0 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:29.490 Found net devices under 0000:86:00.1: cvl_0_1 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.490 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:30:29.750 00:30:29.750 --- 10.0.0.2 ping statistics --- 00:30:29.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.750 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:29.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:30:29.750 00:30:29.750 --- 10.0.0.1 ping statistics --- 00:30:29.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.750 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1064124 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1064124 00:30:29.750 11:36:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1064124 ']' 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.750 11:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.750 [2024-07-12 11:36:15.979281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:29.750 [2024-07-12 11:36:15.979369] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.750 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.750 [2024-07-12 11:36:16.086977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.009 [2024-07-12 11:36:16.296410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.009 [2024-07-12 11:36:16.296459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:30.009 [2024-07-12 11:36:16.296471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.009 [2024-07-12 11:36:16.296480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.009 [2024-07-12 11:36:16.296489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.009 [2024-07-12 11:36:16.296576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.009 [2024-07-12 11:36:16.296652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.009 [2024-07-12 11:36:16.296751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.009 [2024-07-12 11:36:16.296775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.576 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.576 [2024-07-12 11:36:16.799536] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.577 11:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.835 Malloc1 00:30:30.835 [2024-07-12 11:36:16.962628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.835 Malloc2 00:30:30.835 Malloc3 00:30:31.093 Malloc4 00:30:31.093 Malloc5 00:30:31.352 Malloc6 00:30:31.352 Malloc7 00:30:31.611 Malloc8 00:30:31.611 Malloc9 00:30:31.611 Malloc10 00:30:31.870 11:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.870 11:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:31.870 11:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.870 11:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1064412 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1064412 /var/tmp/bdevperf.sock 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1064412 ']' 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.870 { 00:30:31.870 "params": { 00:30:31.870 "name": "Nvme$subsystem", 00:30:31.870 "trtype": "$TEST_TRANSPORT", 00:30:31.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.870 "adrfam": "ipv4", 00:30:31.870 "trsvcid": "$NVMF_PORT", 00:30:31.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.870 "hdgst": ${hdgst:-false}, 00:30:31.870 "ddgst": ${ddgst:-false} 00:30:31.870 }, 00:30:31.870 "method": "bdev_nvme_attach_controller" 00:30:31.870 } 00:30:31.870 EOF 00:30:31.870 )") 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.870 { 00:30:31.870 "params": { 00:30:31.870 "name": "Nvme$subsystem", 00:30:31.870 "trtype": "$TEST_TRANSPORT", 00:30:31.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.870 "adrfam": "ipv4", 00:30:31.870 "trsvcid": "$NVMF_PORT", 00:30:31.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.870 "hdgst": ${hdgst:-false}, 00:30:31.870 "ddgst": ${ddgst:-false} 00:30:31.870 
}, 00:30:31.870 "method": "bdev_nvme_attach_controller" 00:30:31.870 } 00:30:31.870 EOF 00:30:31.870 )") 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.870 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 
"params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.871 { 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme$subsystem", 00:30:31.871 "trtype": "$TEST_TRANSPORT", 00:30:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "$NVMF_PORT", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.871 "hdgst": ${hdgst:-false}, 00:30:31.871 "ddgst": ${ddgst:-false} 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 } 00:30:31.871 EOF 00:30:31.871 )") 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:31.871 11:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme1", 00:30:31.871 "trtype": "tcp", 00:30:31.871 "traddr": "10.0.0.2", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "4420", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.871 "hdgst": false, 00:30:31.871 "ddgst": false 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 },{ 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme2", 00:30:31.871 "trtype": "tcp", 00:30:31.871 "traddr": "10.0.0.2", 00:30:31.871 "adrfam": "ipv4", 00:30:31.871 "trsvcid": "4420", 00:30:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:31.871 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:31.871 "hdgst": false, 00:30:31.871 "ddgst": false 00:30:31.871 }, 00:30:31.871 "method": "bdev_nvme_attach_controller" 00:30:31.871 },{ 00:30:31.871 "params": { 00:30:31.871 "name": "Nvme3", 00:30:31.871 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme4", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme5", 00:30:31.872 
"trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme6", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme7", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme8", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme9", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": 
false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 },{ 00:30:31.872 "params": { 00:30:31.872 "name": "Nvme10", 00:30:31.872 "trtype": "tcp", 00:30:31.872 "traddr": "10.0.0.2", 00:30:31.872 "adrfam": "ipv4", 00:30:31.872 "trsvcid": "4420", 00:30:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:31.872 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:31.872 "hdgst": false, 00:30:31.872 "ddgst": false 00:30:31.872 }, 00:30:31.872 "method": "bdev_nvme_attach_controller" 00:30:31.872 }' 00:30:31.872 [2024-07-12 11:36:18.085051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:31.872 [2024-07-12 11:36:18.085142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064412 ] 00:30:31.872 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.872 [2024-07-12 11:36:18.189749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.131 [2024-07-12 11:36:18.418827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.034 Running I/O for 10 seconds... 
00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:34.294 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.553 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1064412 00:30:34.812 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@948 -- # '[' -z 1064412 ']' 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1064412 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1064412 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1064412' 00:30:34.813 killing process with pid 1064412 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1064412 00:30:34.813 11:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1064412 00:30:34.813 Received shutdown signal, test time was about 0.819677 seconds 00:30:34.813 00:30:34.813 Latency(us) 00:30:34.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme1n1 : 0.78 247.74 15.48 0.00 0.00 254793.46 19261.89 238892.97 00:30:34.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme2n1 : 0.80 239.47 14.97 0.00 0.00 258062.54 20059.71 244363.80 00:30:34.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 
00:30:34.813 Nvme3n1 : 0.78 253.89 15.87 0.00 0.00 234940.12 7522.39 240716.58 00:30:34.813 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme4n1 : 0.78 268.22 16.76 0.00 0.00 214865.27 16982.37 237069.36 00:30:34.813 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme5n1 : 0.80 241.03 15.06 0.00 0.00 239272.59 21427.42 240716.58 00:30:34.813 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme6n1 : 0.79 249.53 15.60 0.00 0.00 222499.42 7636.37 222480.47 00:30:34.813 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme7n1 : 0.79 244.45 15.28 0.00 0.00 224274.70 16868.40 240716.58 00:30:34.813 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme8n1 : 0.81 238.03 14.88 0.00 0.00 225585.72 17780.20 249834.63 00:30:34.813 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme9n1 : 0.81 236.71 14.79 0.00 0.00 221488.83 18692.01 251658.24 00:30:34.813 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.813 Verification LBA range: start 0x0 length 0x400 00:30:34.813 Nvme10n1 : 0.82 234.44 14.65 0.00 0.00 218423.65 18350.08 269894.34 00:30:34.813 =================================================================================================================== 00:30:34.813 Total : 2453.51 153.34 0.00 0.00 231253.79 7522.39 269894.34 00:30:36.190 11:36:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:37.125 11:36:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1064124 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:37.125 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.126 rmmod nvme_tcp 00:30:37.126 rmmod nvme_fabrics 00:30:37.126 rmmod nvme_keyring 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1064124 ']' 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 1064124 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1064124 ']' 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1064124 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1064124 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1064124' 00:30:37.126 killing process with pid 1064124 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1064124 00:30:37.126 11:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1064124 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.411 11:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.946 00:30:42.946 real 0m13.139s 00:30:42.946 user 0m44.190s 00:30:42.946 sys 0m1.573s 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:42.946 ************************************ 00:30:42.946 END TEST nvmf_shutdown_tc2 00:30:42.946 ************************************ 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.946 ************************************ 00:30:42.946 START TEST nvmf_shutdown_tc3 00:30:42.946 ************************************ 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.946 
11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.946 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:42.947 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.947 11:36:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:42.947 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:30:42.947 Found net devices under 0000:86:00.0: cvl_0_0 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:42.947 Found net devices under 0000:86:00.1: cvl_0_1 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.947 11:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:30:42.947 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.947 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.947 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:30:42.947 00:30:42.947 --- 10.0.0.2 ping statistics --- 00:30:42.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.947 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:30:42.948 00:30:42.948 --- 10.0.0.1 ping statistics --- 00:30:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.948 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1066358 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1066358 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1066358 ']' 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:42.948 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:42.948 [2024-07-12 11:36:29.147178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:42.948 [2024-07-12 11:36:29.147269] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.948 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.948 [2024-07-12 11:36:29.256687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.207 [2024-07-12 11:36:29.474104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.207 [2024-07-12 11:36:29.474149] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.207 [2024-07-12 11:36:29.474161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.207 [2024-07-12 11:36:29.474170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.207 [2024-07-12 11:36:29.474179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:43.207 [2024-07-12 11:36:29.474312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.207 [2024-07-12 11:36:29.474388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.207 [2024-07-12 11:36:29.474458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.207 [2024-07-12 11:36:29.474481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.775 [2024-07-12 11:36:29.976057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:43.775 
11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.775 11:36:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:43.775 Malloc1 00:30:44.034 [2024-07-12 11:36:30.144421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.034 Malloc2 00:30:44.034 Malloc3 00:30:44.293 Malloc4 00:30:44.293 Malloc5 00:30:44.552 Malloc6 00:30:44.552 Malloc7 00:30:44.810 Malloc8 00:30:44.810 Malloc9 00:30:44.810 Malloc10 00:30:44.810 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.810 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:44.810 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:44.811 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1066645 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1066645 /var/tmp/bdevperf.sock 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1066645 ']' 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.070 { 00:30:45.070 "params": { 00:30:45.070 "name": "Nvme$subsystem", 00:30:45.070 "trtype": "$TEST_TRANSPORT", 00:30:45.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.070 "adrfam": "ipv4", 00:30:45.070 "trsvcid": "$NVMF_PORT", 00:30:45.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.070 "hdgst": ${hdgst:-false}, 00:30:45.070 "ddgst": ${ddgst:-false} 00:30:45.070 }, 00:30:45.070 "method": "bdev_nvme_attach_controller" 00:30:45.070 } 00:30:45.070 EOF 00:30:45.070 )") 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.070 { 00:30:45.070 "params": { 00:30:45.070 "name": "Nvme$subsystem", 00:30:45.070 "trtype": "$TEST_TRANSPORT", 00:30:45.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.070 "adrfam": "ipv4", 00:30:45.070 "trsvcid": "$NVMF_PORT", 00:30:45.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.070 "hdgst": ${hdgst:-false}, 00:30:45.070 "ddgst": ${ddgst:-false} 00:30:45.070 
}, 00:30:45.070 "method": "bdev_nvme_attach_controller" 00:30:45.070 } 00:30:45.070 EOF 00:30:45.070 )") 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.070 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.070 { 00:30:45.070 "params": { 00:30:45.070 "name": "Nvme$subsystem", 00:30:45.070 "trtype": "$TEST_TRANSPORT", 00:30:45.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 
"params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.071 { 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme$subsystem", 00:30:45.071 "trtype": "$TEST_TRANSPORT", 00:30:45.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "$NVMF_PORT", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.071 "hdgst": ${hdgst:-false}, 00:30:45.071 "ddgst": ${ddgst:-false} 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 } 00:30:45.071 EOF 00:30:45.071 )") 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:45.071 11:36:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme1", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme2", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme3", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme4", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme5", 00:30:45.071 
"trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme6", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.071 "trsvcid": "4420", 00:30:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:45.071 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:45.071 "hdgst": false, 00:30:45.071 "ddgst": false 00:30:45.071 }, 00:30:45.071 "method": "bdev_nvme_attach_controller" 00:30:45.071 },{ 00:30:45.071 "params": { 00:30:45.071 "name": "Nvme7", 00:30:45.071 "trtype": "tcp", 00:30:45.071 "traddr": "10.0.0.2", 00:30:45.071 "adrfam": "ipv4", 00:30:45.072 "trsvcid": "4420", 00:30:45.072 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:45.072 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:45.072 "hdgst": false, 00:30:45.072 "ddgst": false 00:30:45.072 }, 00:30:45.072 "method": "bdev_nvme_attach_controller" 00:30:45.072 },{ 00:30:45.072 "params": { 00:30:45.072 "name": "Nvme8", 00:30:45.072 "trtype": "tcp", 00:30:45.072 "traddr": "10.0.0.2", 00:30:45.072 "adrfam": "ipv4", 00:30:45.072 "trsvcid": "4420", 00:30:45.072 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:45.072 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:45.072 "hdgst": false, 00:30:45.072 "ddgst": false 00:30:45.072 }, 00:30:45.072 "method": "bdev_nvme_attach_controller" 00:30:45.072 },{ 00:30:45.072 "params": { 00:30:45.072 "name": "Nvme9", 00:30:45.072 "trtype": "tcp", 00:30:45.072 "traddr": "10.0.0.2", 00:30:45.072 "adrfam": "ipv4", 00:30:45.072 "trsvcid": "4420", 00:30:45.072 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:45.072 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:45.072 "hdgst": false, 00:30:45.072 "ddgst": 
false 00:30:45.072 }, 00:30:45.072 "method": "bdev_nvme_attach_controller" 00:30:45.072 },{ 00:30:45.072 "params": { 00:30:45.072 "name": "Nvme10", 00:30:45.072 "trtype": "tcp", 00:30:45.072 "traddr": "10.0.0.2", 00:30:45.072 "adrfam": "ipv4", 00:30:45.072 "trsvcid": "4420", 00:30:45.072 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:45.072 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:45.072 "hdgst": false, 00:30:45.072 "ddgst": false 00:30:45.072 }, 00:30:45.072 "method": "bdev_nvme_attach_controller" 00:30:45.072 }' 00:30:45.072 [2024-07-12 11:36:31.257244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:45.072 [2024-07-12 11:36:31.257337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066645 ] 00:30:45.072 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.072 [2024-07-12 11:36:31.363068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.331 [2024-07-12 11:36:31.600777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.860 Running I/O for 10 seconds... 
00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.860 11:36:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:47.860 11:36:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:47.860 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:48.117 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1066358 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1066358 ']' 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1066358 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1066358 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:48.389 
11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1066358' 00:30:48.389 killing process with pid 1066358 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1066358 00:30:48.389 11:36:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1066358 00:30:48.389 [2024-07-12 11:36:34.542830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.542887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.542914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.542926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.542940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.542950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.542963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.542973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.542985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543904] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.543982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.543988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.543992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544033] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the 
state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.389 [2024-07-12 11:36:34.544141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.389 [2024-07-12 11:36:34.544147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.389 [2024-07-12 11:36:34.544150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.390 [2024-07-12 11:36:34.544160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.390 [2024-07-12 11:36:34.544181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.390 [2024-07-12 11:36:34.544191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.390 [2024-07-12 11:36:34.544200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.390 [2024-07-12 11:36:34.544210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:30:48.390 [2024-07-12 11:36:34.544219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be 
set
00:30:48.390 [2024-07-12 11:36:34.544221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.544373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.544388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.544699] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000333400 was
disconnected and freed. reset controller.
00:30:48.390 [2024-07-12 11:36:34.547793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.390 [2024-07-12 11:36:34.547879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:30:48.390 [2024-07-12 11:36:34.549760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.390 [2024-07-12 11:36:34.549794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:30:48.390 [2024-07-12 11:36:34.549807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:30:48.390 [2024-07-12 11:36:34.549886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.549902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.549927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.549938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.549951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.549962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.549974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.549984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.549997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.550983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.550994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.551004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.551015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.551025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.551036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.551045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.551056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.390 [2024-07-12 11:36:34.551066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.390 [2024-07-12 11:36:34.551078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.391 [2024-07-12 11:36:34.551305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.391 [2024-07-12 11:36:34.551316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333680 is same with the state(5) to be set
00:30:48.391 [2024-07-12 11:36:34.551634] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000333680 was disconnected and freed. reset controller.
00:30:48.391 [2024-07-12 11:36:34.552363] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.391 [2024-07-12 11:36:34.552419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:30:48.391 [2024-07-12 11:36:34.552478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same [2024-07-12 11:36:34.552657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cwith the state(5) to be set 00:30:48.391 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552719] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-12 11:36:34.552777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same id:0 cdw10:00000000 cdw11:00000000 00:30:48.391 with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same [2024-07-12 11:36:34.552790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cwith the state(5) to be set 00:30:48.391 dw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.391 [2024-07-12 11:36:34.552802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.391 [2024-07-12 11:36:34.552856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.391 [2024-07-12 11:36:34.552876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552964] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.552990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 
[2024-07-12 11:36:34.553072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the 
state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.553254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.554457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:48.391 [2024-07-12 11:36:34.554495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): 
Bad file descriptor 00:30:48.391 [2024-07-12 11:36:34.554510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.391 [2024-07-12 11:36:34.554520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.391 [2024-07-12 11:36:34.554532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.391 [2024-07-12 11:36:34.554979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.391 [2024-07-12 11:36:34.556036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.391 [2024-07-12 11:36:34.556065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420 00:30:48.391 [2024-07-12 11:36:34.556077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556279] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 
[2024-07-12 11:36:34.556387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.391 [2024-07-12 11:36:34.556395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.556578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556658] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.392 [2024-07-12 11:36:34.556664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.556718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is 
same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.557103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:48.392 [2024-07-12 11:36:34.557125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:48.392 [2024-07-12 11:36:34.557136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:48.392 [2024-07-12 11:36:34.557588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.392 [2024-07-12 11:36:34.558258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 
11:36:34.558360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be 
set 00:30:48.392 [2024-07-12 11:36:34.558477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is 
same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558786] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.558845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.559239] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.392 [2024-07-12 11:36:34.559568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.392 [2024-07-12 11:36:34.560566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.392 [2024-07-12 11:36:34.560594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:30:48.392 [2024-07-12 11:36:34.560608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is 
same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.560805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.561022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.392 [2024-07-12 11:36:34.561040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.392 [2024-07-12 11:36:34.561051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.392 [2024-07-12 11:36:34.561243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.392 [2024-07-12 11:36:34.561623] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.392 [2024-07-12 11:36:34.562369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.563057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.563195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.563259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:48.392 [2024-07-12 11:36:34.563336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.392 [2024-07-12 11:36:34.563346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f080 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.563365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.564671] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000334080 was disconnected and freed. reset controller. 00:30:48.392 [2024-07-12 11:36:34.566123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:48.392 [2024-07-12 11:36:34.566180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.566357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:48.392 [2024-07-12 11:36:34.566694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.392 [2024-07-12 11:36:34.566718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f800 with addr=10.0.0.2, port=4420 00:30:48.392 [2024-07-12 11:36:34.566729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f800 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.566904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.392 [2024-07-12 11:36:34.566918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420 00:30:48.392 [2024-07-12 11:36:34.566927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.567087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:30:48.392 [2024-07-12 11:36:34.567227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:48.392 [2024-07-12 11:36:34.567244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:48.392 [2024-07-12 11:36:34.567254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:48.392 [2024-07-12 11:36:34.567271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:48.392 [2024-07-12 11:36:34.567281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:48.392 [2024-07-12 11:36:34.567291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:48.392 [2024-07-12 11:36:34.567438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.392 [2024-07-12 11:36:34.567453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.392 [2024-07-12 11:36:34.567721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same 
with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.392 [2024-07-12 11:36:34.567989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.567998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568072] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 
[2024-07-12 11:36:34.568180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the 
state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:30:48.393 [2024-07-12 11:36:34.568821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.568996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.393 [2024-07-12 11:36:34.569186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 
11:36:34.569703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.569984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.569994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 
[2024-07-12 11:36:34.570082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.393 [2024-07-12 11:36:34.570195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.393 [2024-07-12 11:36:34.570206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.394 [2024-07-12 11:36:34.570217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.394 [2024-07-12 11:36:34.570227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.394 [2024-07-12 11:36:34.570238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.394 [2024-07-12 11:36:34.570249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.394 [2024-07-12 11:36:34.570259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.394 [2024-07-12 11:36:34.570270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.394 [2024-07-12 11:36:34.570280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.394 [2024-07-12 11:36:34.570290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334300 is same with the state(5) to be set 
00:30:48.394 [2024-07-12 11:36:34.570468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 
00:30:48.394 [2024-07-12 11:36:34.570571] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x615000334300 was disconnected and freed. reset controller. 
00:30:48.394 [2024-07-12 11:36:34.571765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:30:48.394 [2024-07-12 11:36:34.571822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor 
00:30:48.394 [2024-07-12 11:36:34.571931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:30:48.394 [2024-07-12 11:36:34.572613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:48.394 [2024-07-12 11:36:34.572639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 
00:30:48.394 [2024-07-12 11:36:34.572651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032ff80 is same with the state(5) to be set 
00:30:48.394 [2024-07-12 11:36:34.572852] posix.c:1038:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:30:48.394 [2024-07-12 11:36:34.572866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:30:48.394 [2024-07-12 11:36:34.572876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:30:48.394 [2024-07-12 11:36:34.572997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor 00:30:48.394 [2024-07-12 11:36:34.573016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:30:48.394 [2024-07-12 11:36:34.573096] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:48.394 [2024-07-12 11:36:34.573120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:48.394 [2024-07-12 11:36:34.573131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:48.394 [2024-07-12 11:36:34.573141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:48.394 [2024-07-12 11:36:34.573158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.394 [2024-07-12 11:36:34.573168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.394 [2024-07-12 11:36:34.573176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.394 [2024-07-12 11:36:34.573195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor 00:30:48.394 [2024-07-12 11:36:34.573242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330700 is same with the state(5) to be set 00:30:48.394 [2024-07-12 11:36:34.573370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.394 [2024-07-12 11:36:34.573455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330e80 is same with the state(5) to be set 00:30:48.394 [2024-07-12 11:36:34.573486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f080 (9): Bad file descriptor 00:30:48.394 [2024-07-12 11:36:34.573565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.394 [2024-07-12 11:36:34.573578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.394 [2024-07-12 11:36:34.573645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.573986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.573997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.394 [2024-07-12 11:36:34.574055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.394 [2024-07-12 11:36:34.574433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.394 [2024-07-12 11:36:34.574445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 
11:36:34.574566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574689] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 
[2024-07-12 11:36:34.574937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.574979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.574990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.575000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.575011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.575023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.575035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.575045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.575056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.575067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.575079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.575089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.575100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333900 is same with the state(5) to be set 00:30:48.395 [2024-07-12 11:36:34.576445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.395 [2024-07-12 11:36:34.576677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.576981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.576993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 
11:36:34.577177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.395 [2024-07-12 11:36:34.577534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.395 [2024-07-12 11:36:34.577544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 
[2024-07-12 11:36:34.577565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.577905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.577916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334a80 is same with the state(5) to be set 00:30:48.396 [2024-07-12 11:36:34.583198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:48.396 [2024-07-12 11:36:34.583230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:48.396 [2024-07-12 11:36:34.583335] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330700 (9): Bad file descriptor 00:30:48.396 [2024-07-12 11:36:34.583363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330e80 (9): Bad file descriptor 00:30:48.396 [2024-07-12 11:36:34.583675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.396 [2024-07-12 11:36:34.583698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=4420 00:30:48.396 [2024-07-12 11:36:34.583710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set 00:30:48.396 [2024-07-12 11:36:34.583924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.396 [2024-07-12 11:36:34.583940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000331600 with addr=10.0.0.2, port=4420 00:30:48.396 [2024-07-12 11:36:34.583950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set 00:30:48.396 [2024-07-12 11:36:34.584397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.584978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.584991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 
11:36:34.585274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.396 [2024-07-12 11:36:34.585680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.396 [2024-07-12 11:36:34.585870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.396 [2024-07-12 11:36:34.585883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.585899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.585912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.585922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.585934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x615000333b80 is same with the state(5) to be set 00:30:48.397 [2024-07-12 11:36:34.587241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587653] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 
11:36:34.587910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.587985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.587997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
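Every completion record in this run carries the same NVMe status token, `(00/08)`. A minimal sketch of how to read it (the `decode_status` helper is hypothetical, written here only for illustration; the two fields are assumed to be hex, which matches how SPDK's `spdk_nvme_print_completion` formats them):

```python
# "(SCT/SC)" in the log is Status Code Type / Status Code.
# "(00/08)" is SCT 0x0 (generic command status) with SC 0x08,
# "Command Aborted due to SQ Deletion": the I/O submission queue was
# deleted (here, during controller reset) before these READs completed.
def decode_status(token: str) -> tuple[int, int]:
    """Split a log token like '(00/08)' into (sct, sc) integers."""
    sct, sc = token.strip("()").split("/")
    return int(sct, 16), int(sc, 16)

sct, sc = decode_status("(00/08)")
print(f"SCT={sct:#x} SC={sc:#x}")  # generic status, aborted - SQ deletion
```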
00:30:48.397 [2024-07-12 11:36:34.588279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588401] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.397 [2024-07-12 11:36:34.588630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.397 [2024-07-12 11:36:34.588639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:30:48.397 [2024-07-12 11:36:34.588657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.397 [2024-07-12 11:36:34.588667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.397 [2024-07-12 11:36:34.588679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.397 [2024-07-12 11:36:34.588689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.397 [2024-07-12 11:36:34.588699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333e00 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.590395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:48.397 [2024-07-12 11:36:34.590532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor
00:30:48.397 [2024-07-12 11:36:34.590549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor
00:30:48.397 [2024-07-12 11:36:34.590902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.590922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.590933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.591173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f800 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.591187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f800 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.591374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.591390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.591609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.591619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032ff80 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.591798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e900 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.591808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.397 [2024-07-12 11:36:34.591909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f080 with addr=10.0.0.2, port=4420
00:30:48.397 [2024-07-12 11:36:34.591919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f080 is same with the state(5) to be set
00:30:48.397 [2024-07-12 11:36:34.591928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:30:48.397 [2024-07-12 11:36:34.591937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:30:48.397 [2024-07-12 11:36:34.591949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:30:48.398 [2024-07-12 11:36:34.591966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.591975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.591984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:30:48.398 [2024-07-12 11:36:34.592820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.592841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.592855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f080 (9): Bad file descriptor
00:30:48.398 [2024-07-12 11:36:34.592980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.592991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.593027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.593057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.593086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.593118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:30:48.398 [2024-07-12 11:36:34.593147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:30:48.398 [2024-07-12 11:36:34.593155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:30:48.398 [2024-07-12 11:36:34.593198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.593209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.593217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.593225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.593233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.398 [2024-07-12 11:36:34.593241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
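The reconnect attempts above fail with "connect() failed, errno = 111" from SPDK's posix sock layer. A minimal sketch decoding that value, assuming a Linux host (errno numbering is platform-specific, but this log was produced on Linux, where 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 at that moment):

```python
import errno
import os

# Map the raw errno from the log to its symbolic name and message.
# On Linux, errno 111 is ECONNREFUSED ("Connection refused"): the
# target's listener was unreachable while the controllers were resetting.
name = errno.errorcode[111]
message = os.strerror(111)
print(f"errno 111 -> {name}: {message}")
```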
00:30:48.398 [2024-07-12 11:36:34.593351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.398 [2024-07-12 11:36:34.593617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.398 [2024-07-12 11:36:34.593630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.398 [2024-07-12 11:36:34.593640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.398 [2024-07-12 11:36:34.593652 through 11:36:34.594740] nvme_qpair.c: 50 further identical NOTICE pairs: 243:nvme_io_qpair_print_command reports READ sqid:1 cid:12 through cid:61 nsid:1 (lba 26112 through 32384 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and 474:spdk_nvme_print_completion reports each as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.398 [2024-07-12 11:36:34.594752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.398 [2024-07-12 11:36:34.594761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.398 [2024-07-12 11:36:34.594773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.398 [2024-07-12 11:36:34.594782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.398 [2024-07-12 11:36:34.594792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334580 is same with the state(5) to be set
00:30:48.398 [2024-07-12 11:36:34.599882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:48.398 task offset: 30848 on job bdev=Nvme1n1 fails
00:30:48.398
00:30:48.398 Latency(us)
00:30:48.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:48.398 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme1n1 ended in about 0.91 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme1n1 : 0.91 210.95 13.18 70.32 0.00 225093.23 5185.89 244363.80
00:30:48.398 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme2n1 ended in about 0.92 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme2n1 : 0.92 209.39 13.09 69.80 0.00 222520.93 6012.22 240716.58
00:30:48.398 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme3n1 ended in about 0.94 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme3n1 : 0.94 204.45 12.78 68.15 0.00 223811.45 16070.57 246187.41
00:30:48.398 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme4n1 ended in about 0.95 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme4n1 : 0.95 202.12 12.63 67.37 0.00 222180.62 25302.59 231598.53
00:30:48.398 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme5n1 ended in about 0.95 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme5n1 : 0.95 134.36 8.40 67.18 0.00 291571.39 21199.47 258952.68
00:30:48.398 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme6n1 : 0.93 207.08 12.94 0.00 0.00 276960.24 20173.69 248011.02
00:30:48.398 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme7n1 ended in about 0.93 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme7n1 : 0.93 205.41 12.84 68.47 0.00 205590.65 6753.06 249834.63
00:30:48.398 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme8n1 ended in about 0.96 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme8n1 : 0.96 200.26 12.52 66.75 0.00 207350.43 20287.67 235245.75
00:30:48.398 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme9n1 : 0.93 207.33 12.96 0.00 0.00 259301.73 20287.67 248011.02
00:30:48.398 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:48.398 Job: Nvme10n1 ended in about 0.94 seconds with error
00:30:48.398 Verification LBA range: start 0x0 length 0x400
00:30:48.398 Nvme10n1 : 0.94 135.89 8.49 67.95 0.00 259448.95 21199.47 268070.73
00:30:48.398 ===================================================================================================================
00:30:48.399 Total : 1917.24 119.83 545.99 0.00 235778.78 5185.89 268070.73
00:30:48.399 [2024-07-12 11:36:34.692964] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:48.399 [2024-07-12 11:36:34.693029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:48.399 [2024-07-12 11:36:34.693182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:48.399 [2024-07-12 11:36:34.693524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.399 [2024-07-12 11:36:34.693551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330e80 with addr=10.0.0.2, port=4420
00:30:48.399 [2024-07-12 11:36:34.693565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330e80 is same with the state(5) to be set
00:30:48.399 [2024-07-12 11:36:34.693738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.399 [2024-07-12 11:36:34.693754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330700 with addr=10.0.0.2, port=4420
00:30:48.399 [2024-07-12 11:36:34.693764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330700 is same with the state(5) to be set
00:30:48.399 [2024-07-12 11:36:34.693830] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:48.399 [2024-07-12 11:36:34.694271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.694604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000331600 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.694617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331600 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.694635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330e80 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.694652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330700 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.694730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:48.399 [2024-07-12 11:36:34.694796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:48.399 [2024-07-12 11:36:34.695044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.695062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.695073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.695085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331600 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.695097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.695106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.695119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:48.399 [2024-07-12 11:36:34.695135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.695144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.695153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:48.399 [2024-07-12 11:36:34.695211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.695223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.399 [2024-07-12 11:36:34.695451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.695468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f080 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.695478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f080 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.695571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.695585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e900 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.695595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.695820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.695834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.695844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032ff80 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.696016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.696030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.696040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.696214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.696228] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032f800 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.696237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032f800 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.696389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.399 [2024-07-12 11:36:34.696410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=10.0.0.2, port=4420 00:30:48.399 [2024-07-12 11:36:34.696420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set 00:30:48.399 [2024-07-12 11:36:34.696432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.399 [2024-07-12 11:36:34.696529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f080 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e900 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ff80 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032f800 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:30:48.399 [2024-07-12 11:36:34.696602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.399 [2024-07-12 11:36:34.696669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.399 [2024-07-12 11:36:34.696792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:48.399 [2024-07-12 11:36:34.696831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:48.399 [2024-07-12 11:36:34.696840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:48.399 [2024-07-12 11:36:34.696878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.696888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.696897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.696906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.696914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.399 [2024-07-12 11:36:34.696922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:51.683 11:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:51.683 11:36:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1066645 00:30:52.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1066645) - No such process 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:52.620 rmmod nvme_tcp 00:30:52.620 rmmod nvme_fabrics 00:30:52.620 rmmod nvme_keyring 00:30:52.620 11:36:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.620 11:36:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.158 00:30:55.158 real 0m12.214s 00:30:55.158 user 0m36.615s 00:30:55.158 sys 0m1.560s 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:55.158 ************************************ 00:30:55.158 END TEST nvmf_shutdown_tc3 00:30:55.158 ************************************ 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- 
# return 0 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:55.158 00:30:55.158 real 0m46.479s 00:30:55.158 user 2m20.189s 00:30:55.158 sys 0m8.875s 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:55.158 11:36:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:55.158 ************************************ 00:30:55.159 END TEST nvmf_shutdown 00:30:55.159 ************************************ 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:55.159 11:36:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.159 11:36:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.159 11:36:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:30:55.159 11:36:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.159 11:36:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.159 ************************************ 00:30:55.159 START TEST nvmf_multicontroller 00:30:55.159 ************************************ 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:55.159 * Looking for test storage... 
00:30:55.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.159 
11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:55.159 11:36:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:55.159 11:36:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.513 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.514 11:36:46 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:00.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:00.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.514 11:36:46 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:00.514 Found net devices under 0000:86:00.0: cvl_0_0 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:00.514 Found net devices under 0000:86:00.1: cvl_0_1 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:31:00.514 00:31:00.514 --- 10.0.0.2 ping statistics --- 00:31:00.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.514 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:31:00.514 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:00.514 00:31:00.514 --- 10.0.0.1 ping statistics --- 00:31:00.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.515 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1071364 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1071364 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1071364 ']' 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.515 11:36:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.515 [2024-07-12 11:36:46.850623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:00.515 [2024-07-12 11:36:46.850710] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.773 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.773 [2024-07-12 11:36:46.960927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.032 [2024-07-12 11:36:47.183177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.032 [2024-07-12 11:36:47.183214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.032 [2024-07-12 11:36:47.183229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.032 [2024-07-12 11:36:47.183238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.032 [2024-07-12 11:36:47.183248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:01.032 [2024-07-12 11:36:47.183371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.032 [2024-07-12 11:36:47.183442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.032 [2024-07-12 11:36:47.183451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.291 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.291 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:01.291 11:36:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.291 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.291 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 [2024-07-12 11:36:47.673764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 Malloc0 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 [2024-07-12 11:36:47.805919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 [2024-07-12 11:36:47.813875] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.551 Malloc1 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.551 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.811 11:36:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1071611 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1071611 /var/tmp/bdevperf.sock 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1071611 ']' 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.812 11:36:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.749 NVMe0n1 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.749 1 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.749 request: 00:31:02.749 { 00:31:02.749 "name": "NVMe0", 00:31:02.749 "trtype": "tcp", 00:31:02.749 "traddr": "10.0.0.2", 00:31:02.749 "adrfam": "ipv4", 00:31:02.749 "trsvcid": "4420", 00:31:02.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.749 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:02.749 "hostaddr": "10.0.0.2", 00:31:02.749 "hostsvcid": "60000", 00:31:02.749 "prchk_reftag": false, 00:31:02.749 "prchk_guard": false, 00:31:02.749 "hdgst": false, 00:31:02.749 "ddgst": false, 00:31:02.749 "method": "bdev_nvme_attach_controller", 00:31:02.749 "req_id": 1 00:31:02.749 } 00:31:02.749 Got JSON-RPC error response 00:31:02.749 response: 00:31:02.749 { 00:31:02.749 "code": -114, 00:31:02.749 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.749 } 00:31:02.749 
11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:31:02.749 request: 00:31:02.749 { 00:31:02.749 "name": "NVMe0", 00:31:02.749 "trtype": "tcp", 00:31:02.749 "traddr": "10.0.0.2", 00:31:02.749 "adrfam": "ipv4", 00:31:02.749 "trsvcid": "4420", 00:31:02.749 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:02.749 "hostaddr": "10.0.0.2", 00:31:02.749 "hostsvcid": "60000", 00:31:02.749 "prchk_reftag": false, 00:31:02.749 "prchk_guard": false, 00:31:02.749 "hdgst": false, 00:31:02.749 "ddgst": false, 00:31:02.749 "method": "bdev_nvme_attach_controller", 00:31:02.749 "req_id": 1 00:31:02.749 } 00:31:02.749 Got JSON-RPC error response 00:31:02.749 response: 00:31:02.749 { 00:31:02.749 "code": -114, 00:31:02.749 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.749 } 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.749 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.750 request: 00:31:02.750 { 00:31:02.750 "name": "NVMe0", 00:31:02.750 "trtype": "tcp", 00:31:02.750 "traddr": "10.0.0.2", 00:31:02.750 "adrfam": "ipv4", 00:31:02.750 "trsvcid": "4420", 00:31:02.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.750 "hostaddr": "10.0.0.2", 00:31:02.750 "hostsvcid": "60000", 00:31:02.750 "prchk_reftag": false, 00:31:02.750 "prchk_guard": false, 00:31:02.750 "hdgst": false, 00:31:02.750 "ddgst": false, 00:31:02.750 "multipath": "disable", 00:31:02.750 "method": "bdev_nvme_attach_controller", 00:31:02.750 "req_id": 1 00:31:02.750 } 00:31:02.750 Got JSON-RPC error response 00:31:02.750 response: 00:31:02.750 { 00:31:02.750 "code": -114, 00:31:02.750 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:31:02.750 } 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.750 11:36:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:02.750 request: 00:31:02.750 { 00:31:02.750 "name": "NVMe0", 00:31:02.750 "trtype": "tcp", 00:31:02.750 "traddr": "10.0.0.2", 00:31:02.750 "adrfam": "ipv4", 00:31:02.750 "trsvcid": "4420", 00:31:02.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.750 "hostaddr": "10.0.0.2", 00:31:02.750 
"hostsvcid": "60000", 00:31:02.750 "prchk_reftag": false, 00:31:02.750 "prchk_guard": false, 00:31:02.750 "hdgst": false, 00:31:02.750 "ddgst": false, 00:31:02.750 "multipath": "failover", 00:31:02.750 "method": "bdev_nvme_attach_controller", 00:31:02.750 "req_id": 1 00:31:02.750 } 00:31:02.750 Got JSON-RPC error response 00:31:02.750 response: 00:31:02.750 { 00:31:02.750 "code": -114, 00:31:02.750 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:31:02.750 } 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.750 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.008 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.008 11:36:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.008 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.266 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:03.266 11:36:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:04.643 0 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.643 
11:36:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1071611 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1071611 ']' 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1071611 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071611 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071611' 00:31:04.643 killing process with pid 1071611 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1071611 00:31:04.643 11:36:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1071611 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.580 11:36:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:31:05.580 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:31:05.580 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:05.580 [2024-07-12 11:36:48.005396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:05.580 [2024-07-12 11:36:48.005516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071611 ] 00:31:05.580 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.580 [2024-07-12 11:36:48.107935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.580 [2024-07-12 11:36:48.340490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.580 [2024-07-12 11:36:49.448334] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name eeca3286-4e81-4126-9ff1-e458fd27f8fa already exists 00:31:05.580 [2024-07-12 11:36:49.448381] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:eeca3286-4e81-4126-9ff1-e458fd27f8fa alias for bdev NVMe1n1 00:31:05.580 [2024-07-12 11:36:49.448395] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:05.580 Running I/O for 1 seconds... 
00:31:05.580 00:31:05.580 Latency(us) 00:31:05.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.580 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:05.581 NVMe0n1 : 1.00 21007.80 82.06 0.00 0.00 6084.26 3091.59 14303.94 00:31:05.581 =================================================================================================================== 00:31:05.581 Total : 21007.80 82.06 0.00 0.00 6084.26 3091.59 14303.94 00:31:05.581 Received shutdown signal, test time was about 1.000000 seconds 00:31:05.581 00:31:05.581 Latency(us) 00:31:05.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.581 =================================================================================================================== 00:31:05.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.581 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:05.581 rmmod nvme_tcp 00:31:05.581 rmmod nvme_fabrics 00:31:05.581 rmmod nvme_keyring 
00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1071364 ']' 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1071364 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1071364 ']' 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1071364 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071364 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071364' 00:31:05.581 killing process with pid 1071364 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1071364 00:31:05.581 11:36:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1071364 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.482 11:36:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.385 11:36:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:09.385 00:31:09.385 real 0m14.421s 00:31:09.385 user 0m23.693s 00:31:09.385 sys 0m5.070s 00:31:09.385 11:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.385 11:36:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:09.385 ************************************ 00:31:09.385 END TEST nvmf_multicontroller 00:31:09.385 ************************************ 00:31:09.385 11:36:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:09.385 11:36:55 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:09.385 11:36:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:09.385 11:36:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.385 11:36:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.385 ************************************ 00:31:09.385 START TEST nvmf_aer 00:31:09.385 ************************************ 00:31:09.385 11:36:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:09.643 * Looking for test storage... 
00:31:09.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.643 11:36:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:14.914 11:37:00 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:14.914 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:14.914 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:14.914 Found net devices under 0000:86:00.0: cvl_0_0 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:14.914 Found net devices under 0000:86:00.1: cvl_0_1 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:14.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:31:14.914 00:31:14.914 --- 10.0.0.2 ping statistics --- 00:31:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.914 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:31:14.914 00:31:14.914 --- 10.0.0.1 ping statistics --- 00:31:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.914 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:14.914 11:37:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1075837 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1075837 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1075837 ']' 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:14.914 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:14.914 [2024-07-12 11:37:01.103573] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:14.914 [2024-07-12 11:37:01.103662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.914 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.914 [2024-07-12 11:37:01.211492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.173 [2024-07-12 11:37:01.426984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:15.173 [2024-07-12 11:37:01.427029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.173 [2024-07-12 11:37:01.427041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.173 [2024-07-12 11:37:01.427050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.173 [2024-07-12 11:37:01.427060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.173 [2024-07-12 11:37:01.427127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.173 [2024-07-12 11:37:01.427200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.173 [2024-07-12 11:37:01.427261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.173 [2024-07-12 11:37:01.427271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 [2024-07-12 11:37:01.933646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.741 11:37:01 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 Malloc0 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 [2024-07-12 11:37:02.055470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.741 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:15.741 [ 00:31:15.741 { 00:31:15.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:15.741 "subtype": "Discovery", 00:31:15.741 "listen_addresses": [], 00:31:15.741 "allow_any_host": true, 00:31:15.741 "hosts": [] 00:31:15.741 }, 00:31:15.741 { 00:31:15.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.741 "subtype": "NVMe", 00:31:15.741 "listen_addresses": [ 00:31:15.741 { 00:31:15.741 "trtype": "TCP", 00:31:15.741 "adrfam": "IPv4", 00:31:15.741 "traddr": "10.0.0.2", 00:31:15.741 "trsvcid": "4420" 00:31:15.742 } 00:31:15.742 ], 00:31:15.742 "allow_any_host": true, 00:31:15.742 "hosts": [], 00:31:15.742 "serial_number": "SPDK00000000000001", 00:31:15.742 "model_number": "SPDK bdev Controller", 00:31:15.742 "max_namespaces": 2, 00:31:15.742 "min_cntlid": 1, 00:31:15.742 "max_cntlid": 65519, 00:31:15.742 "namespaces": [ 00:31:15.742 { 00:31:15.742 "nsid": 1, 00:31:15.742 "bdev_name": "Malloc0", 00:31:15.742 "name": "Malloc0", 00:31:15.742 "nguid": "FCF968D0C5FA47148DED25CAC0E52620", 00:31:15.742 "uuid": "fcf968d0-c5fa-4714-8ded-25cac0e52620" 00:31:15.742 } 00:31:15.742 ] 00:31:15.742 } 00:31:15.742 ] 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1076086 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:15.742 11:37:02 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:15.742 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.000 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:31:16.000 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.259 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.523 Malloc1 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.523 [ 00:31:16.523 { 00:31:16.523 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:16.523 "subtype": "Discovery", 00:31:16.523 "listen_addresses": [], 00:31:16.523 "allow_any_host": true, 00:31:16.523 "hosts": [] 00:31:16.523 }, 00:31:16.523 { 00:31:16.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.523 "subtype": "NVMe", 00:31:16.523 "listen_addresses": [ 00:31:16.523 { 00:31:16.523 "trtype": "TCP", 00:31:16.523 "adrfam": "IPv4", 00:31:16.523 "traddr": "10.0.0.2", 00:31:16.523 "trsvcid": "4420" 00:31:16.523 } 00:31:16.523 ], 00:31:16.523 "allow_any_host": true, 00:31:16.523 "hosts": [], 00:31:16.523 "serial_number": "SPDK00000000000001", 00:31:16.523 "model_number": "SPDK bdev Controller", 00:31:16.523 "max_namespaces": 2, 00:31:16.523 "min_cntlid": 1, 00:31:16.523 "max_cntlid": 65519, 
00:31:16.523 "namespaces": [ 00:31:16.523 { 00:31:16.523 "nsid": 1, 00:31:16.523 "bdev_name": "Malloc0", 00:31:16.523 "name": "Malloc0", 00:31:16.523 "nguid": "FCF968D0C5FA47148DED25CAC0E52620", 00:31:16.523 "uuid": "fcf968d0-c5fa-4714-8ded-25cac0e52620" 00:31:16.523 }, 00:31:16.523 { 00:31:16.523 "nsid": 2, 00:31:16.523 "bdev_name": "Malloc1", 00:31:16.523 "name": "Malloc1", 00:31:16.523 "nguid": "6A9D377EE521466893047065B4CBB179", 00:31:16.523 "uuid": "6a9d377e-e521-4668-9304-7065b4cbb179" 00:31:16.523 } 00:31:16.523 ] 00:31:16.523 } 00:31:16.523 ] 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1076086 00:31:16.523 Asynchronous Event Request test 00:31:16.523 Attaching to 10.0.0.2 00:31:16.523 Attached to 10.0.0.2 00:31:16.523 Registering asynchronous event callbacks... 00:31:16.523 Starting namespace attribute notice tests for all controllers... 00:31:16.523 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:16.523 aer_cb - Changed Namespace 00:31:16.523 Cleaning up... 
00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.523 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:16.781 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.781 11:37:02 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:16.781 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.781 11:37:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:17.112 rmmod nvme_tcp 00:31:17.112 rmmod nvme_fabrics 00:31:17.112 rmmod nvme_keyring 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1075837 ']' 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1075837 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1075837 ']' 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1075837 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1075837 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1075837' 00:31:17.112 killing process with pid 1075837 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1075837 00:31:17.112 11:37:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1075837 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:31:18.515 11:37:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.421 11:37:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.421 00:31:20.421 real 0m10.916s 00:31:20.421 user 0m12.362s 00:31:20.421 sys 0m4.610s 00:31:20.421 11:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:20.421 11:37:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.421 ************************************ 00:31:20.421 END TEST nvmf_aer 00:31:20.421 ************************************ 00:31:20.421 11:37:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:20.421 11:37:06 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:20.421 11:37:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:20.421 11:37:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.421 11:37:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:20.421 ************************************ 00:31:20.421 START TEST nvmf_async_init 00:31:20.421 ************************************ 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:20.421 * Looking for test storage... 
00:31:20.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.421 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=02211af5f08d44739b9255fb5b599517 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.680 11:37:06 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.949 
11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:25.949 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:25.949 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:25.949 Found net devices under 0000:86:00.0: cvl_0_0 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:25.949 Found net devices under 0000:86:00.1: cvl_0_1 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.949 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:25.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:25.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:31:25.950 00:31:25.950 --- 10.0.0.2 ping statistics --- 00:31:25.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.950 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:25.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:25.950 00:31:25.950 --- 10.0.0.1 ping statistics --- 00:31:25.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.950 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1079824 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1079824 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1079824 ']' 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:25.950 11:37:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:25.950 [2024-07-12 11:37:11.991698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:25.950 [2024-07-12 11:37:11.991784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.950 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.950 [2024-07-12 11:37:12.099702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.950 [2024-07-12 11:37:12.304807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:25.950 [2024-07-12 11:37:12.304853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.950 [2024-07-12 11:37:12.304865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.950 [2024-07-12 11:37:12.304876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.950 [2024-07-12 11:37:12.304886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.950 [2024-07-12 11:37:12.304919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 [2024-07-12 11:37:12.801941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 null0 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 02211af5f08d44739b9255fb5b599517 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.519 [2024-07-12 11:37:12.842162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.519 11:37:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.778 nvme0n1 00:31:26.778 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.778 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:26.778 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.778 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.778 [ 00:31:26.778 { 00:31:26.778 "name": "nvme0n1", 00:31:26.778 "aliases": [ 00:31:26.778 "02211af5-f08d-4473-9b92-55fb5b599517" 00:31:26.778 ], 00:31:26.778 "product_name": "NVMe disk", 00:31:26.778 "block_size": 512, 00:31:26.778 "num_blocks": 2097152, 00:31:26.778 "uuid": "02211af5-f08d-4473-9b92-55fb5b599517", 00:31:26.778 "assigned_rate_limits": { 00:31:26.778 "rw_ios_per_sec": 0, 00:31:26.778 "rw_mbytes_per_sec": 0, 00:31:26.778 "r_mbytes_per_sec": 0, 00:31:26.778 "w_mbytes_per_sec": 0 00:31:26.778 }, 00:31:26.778 "claimed": false, 00:31:26.778 "zoned": false, 00:31:26.778 "supported_io_types": { 00:31:26.778 "read": true, 00:31:26.778 "write": true, 00:31:26.778 "unmap": false, 00:31:26.778 "flush": true, 00:31:26.778 "reset": true, 00:31:26.778 "nvme_admin": true, 00:31:26.778 "nvme_io": true, 00:31:26.778 "nvme_io_md": false, 00:31:26.778 "write_zeroes": true, 00:31:26.778 "zcopy": false, 00:31:26.778 "get_zone_info": false, 00:31:26.778 "zone_management": false, 00:31:26.778 "zone_append": false, 00:31:26.778 "compare": 
true, 00:31:26.778 "compare_and_write": true, 00:31:26.778 "abort": true, 00:31:26.778 "seek_hole": false, 00:31:26.778 "seek_data": false, 00:31:26.778 "copy": true, 00:31:26.778 "nvme_iov_md": false 00:31:26.778 }, 00:31:26.778 "memory_domains": [ 00:31:26.778 { 00:31:26.778 "dma_device_id": "system", 00:31:26.778 "dma_device_type": 1 00:31:26.778 } 00:31:26.778 ], 00:31:26.778 "driver_specific": { 00:31:26.778 "nvme": [ 00:31:26.778 { 00:31:26.778 "trid": { 00:31:26.779 "trtype": "TCP", 00:31:26.779 "adrfam": "IPv4", 00:31:26.779 "traddr": "10.0.0.2", 00:31:26.779 "trsvcid": "4420", 00:31:26.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:26.779 }, 00:31:26.779 "ctrlr_data": { 00:31:26.779 "cntlid": 1, 00:31:26.779 "vendor_id": "0x8086", 00:31:26.779 "model_number": "SPDK bdev Controller", 00:31:26.779 "serial_number": "00000000000000000000", 00:31:26.779 "firmware_revision": "24.09", 00:31:26.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.779 "oacs": { 00:31:26.779 "security": 0, 00:31:26.779 "format": 0, 00:31:26.779 "firmware": 0, 00:31:26.779 "ns_manage": 0 00:31:26.779 }, 00:31:26.779 "multi_ctrlr": true, 00:31:26.779 "ana_reporting": false 00:31:26.779 }, 00:31:26.779 "vs": { 00:31:26.779 "nvme_version": "1.3" 00:31:26.779 }, 00:31:26.779 "ns_data": { 00:31:26.779 "id": 1, 00:31:26.779 "can_share": true 00:31:26.779 } 00:31:26.779 } 00:31:26.779 ], 00:31:26.779 "mp_policy": "active_passive" 00:31:26.779 } 00:31:26.779 } 00:31:26.779 ] 00:31:26.779 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.779 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:26.779 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.779 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:26.779 [2024-07-12 11:37:13.102068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.779 [2024-07-12 11:37:13.102152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d500 (9): Bad file descriptor 00:31:27.038 [2024-07-12 11:37:13.234512] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 [ 00:31:27.038 { 00:31:27.038 "name": "nvme0n1", 00:31:27.038 "aliases": [ 00:31:27.038 "02211af5-f08d-4473-9b92-55fb5b599517" 00:31:27.038 ], 00:31:27.038 "product_name": "NVMe disk", 00:31:27.038 "block_size": 512, 00:31:27.038 "num_blocks": 2097152, 00:31:27.038 "uuid": "02211af5-f08d-4473-9b92-55fb5b599517", 00:31:27.038 "assigned_rate_limits": { 00:31:27.038 "rw_ios_per_sec": 0, 00:31:27.038 "rw_mbytes_per_sec": 0, 00:31:27.038 "r_mbytes_per_sec": 0, 00:31:27.038 "w_mbytes_per_sec": 0 00:31:27.038 }, 00:31:27.038 "claimed": false, 00:31:27.038 "zoned": false, 00:31:27.038 "supported_io_types": { 00:31:27.038 "read": true, 00:31:27.038 "write": true, 00:31:27.038 "unmap": false, 00:31:27.038 "flush": true, 00:31:27.038 "reset": true, 00:31:27.038 "nvme_admin": true, 00:31:27.038 "nvme_io": true, 00:31:27.038 "nvme_io_md": false, 00:31:27.038 "write_zeroes": true, 00:31:27.038 "zcopy": false, 00:31:27.038 "get_zone_info": false, 00:31:27.038 "zone_management": false, 00:31:27.038 "zone_append": false, 00:31:27.038 "compare": true, 00:31:27.038 "compare_and_write": true, 00:31:27.038 "abort": true, 00:31:27.038 "seek_hole": false, 00:31:27.038 "seek_data": false, 00:31:27.038 "copy": true, 00:31:27.038 
"nvme_iov_md": false 00:31:27.038 }, 00:31:27.038 "memory_domains": [ 00:31:27.038 { 00:31:27.038 "dma_device_id": "system", 00:31:27.038 "dma_device_type": 1 00:31:27.038 } 00:31:27.038 ], 00:31:27.038 "driver_specific": { 00:31:27.038 "nvme": [ 00:31:27.038 { 00:31:27.038 "trid": { 00:31:27.038 "trtype": "TCP", 00:31:27.038 "adrfam": "IPv4", 00:31:27.038 "traddr": "10.0.0.2", 00:31:27.038 "trsvcid": "4420", 00:31:27.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:27.038 }, 00:31:27.038 "ctrlr_data": { 00:31:27.038 "cntlid": 2, 00:31:27.038 "vendor_id": "0x8086", 00:31:27.038 "model_number": "SPDK bdev Controller", 00:31:27.038 "serial_number": "00000000000000000000", 00:31:27.038 "firmware_revision": "24.09", 00:31:27.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.038 "oacs": { 00:31:27.038 "security": 0, 00:31:27.038 "format": 0, 00:31:27.038 "firmware": 0, 00:31:27.038 "ns_manage": 0 00:31:27.038 }, 00:31:27.038 "multi_ctrlr": true, 00:31:27.038 "ana_reporting": false 00:31:27.038 }, 00:31:27.038 "vs": { 00:31:27.038 "nvme_version": "1.3" 00:31:27.038 }, 00:31:27.038 "ns_data": { 00:31:27.038 "id": 1, 00:31:27.038 "can_share": true 00:31:27.038 } 00:31:27.038 } 00:31:27.038 ], 00:31:27.038 "mp_policy": "active_passive" 00:31:27.038 } 00:31:27.038 } 00:31:27.038 ] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KwgTsTbDRR 
00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KwgTsTbDRR 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 [2024-07-12 11:37:13.298723] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:27.038 [2024-07-12 11:37:13.298866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwgTsTbDRR 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 [2024-07-12 11:37:13.306741] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:27.038 11:37:13 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KwgTsTbDRR 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.038 [2024-07-12 11:37:13.314780] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:27.038 [2024-07-12 11:37:13.314847] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:27.038 nvme0n1 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.038 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.297 [ 00:31:27.297 { 00:31:27.297 "name": "nvme0n1", 00:31:27.297 "aliases": [ 00:31:27.297 "02211af5-f08d-4473-9b92-55fb5b599517" 00:31:27.297 ], 00:31:27.297 "product_name": "NVMe disk", 00:31:27.297 "block_size": 512, 00:31:27.297 "num_blocks": 2097152, 00:31:27.297 "uuid": "02211af5-f08d-4473-9b92-55fb5b599517", 00:31:27.297 "assigned_rate_limits": { 00:31:27.297 "rw_ios_per_sec": 0, 00:31:27.297 "rw_mbytes_per_sec": 0, 00:31:27.297 "r_mbytes_per_sec": 0, 00:31:27.297 "w_mbytes_per_sec": 0 00:31:27.297 }, 00:31:27.297 "claimed": false, 00:31:27.297 "zoned": false, 00:31:27.297 "supported_io_types": { 00:31:27.297 "read": true, 00:31:27.297 "write": true, 00:31:27.297 "unmap": false, 00:31:27.297 "flush": true, 
00:31:27.297 "reset": true, 00:31:27.297 "nvme_admin": true, 00:31:27.297 "nvme_io": true, 00:31:27.297 "nvme_io_md": false, 00:31:27.297 "write_zeroes": true, 00:31:27.297 "zcopy": false, 00:31:27.297 "get_zone_info": false, 00:31:27.297 "zone_management": false, 00:31:27.297 "zone_append": false, 00:31:27.297 "compare": true, 00:31:27.297 "compare_and_write": true, 00:31:27.297 "abort": true, 00:31:27.297 "seek_hole": false, 00:31:27.297 "seek_data": false, 00:31:27.297 "copy": true, 00:31:27.297 "nvme_iov_md": false 00:31:27.297 }, 00:31:27.297 "memory_domains": [ 00:31:27.297 { 00:31:27.297 "dma_device_id": "system", 00:31:27.297 "dma_device_type": 1 00:31:27.298 } 00:31:27.298 ], 00:31:27.298 "driver_specific": { 00:31:27.298 "nvme": [ 00:31:27.298 { 00:31:27.298 "trid": { 00:31:27.298 "trtype": "TCP", 00:31:27.298 "adrfam": "IPv4", 00:31:27.298 "traddr": "10.0.0.2", 00:31:27.298 "trsvcid": "4421", 00:31:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:27.298 }, 00:31:27.298 "ctrlr_data": { 00:31:27.298 "cntlid": 3, 00:31:27.298 "vendor_id": "0x8086", 00:31:27.298 "model_number": "SPDK bdev Controller", 00:31:27.298 "serial_number": "00000000000000000000", 00:31:27.298 "firmware_revision": "24.09", 00:31:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.298 "oacs": { 00:31:27.298 "security": 0, 00:31:27.298 "format": 0, 00:31:27.298 "firmware": 0, 00:31:27.298 "ns_manage": 0 00:31:27.298 }, 00:31:27.298 "multi_ctrlr": true, 00:31:27.298 "ana_reporting": false 00:31:27.298 }, 00:31:27.298 "vs": { 00:31:27.298 "nvme_version": "1.3" 00:31:27.298 }, 00:31:27.298 "ns_data": { 00:31:27.298 "id": 1, 00:31:27.298 "can_share": true 00:31:27.298 } 00:31:27.298 } 00:31:27.298 ], 00:31:27.298 "mp_policy": "active_passive" 00:31:27.298 } 00:31:27.298 } 00:31:27.298 ] 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KwgTsTbDRR 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:27.298 rmmod nvme_tcp 00:31:27.298 rmmod nvme_fabrics 00:31:27.298 rmmod nvme_keyring 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1079824 ']' 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1079824 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1079824 ']' 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1079824 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 
00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1079824 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1079824' 00:31:27.298 killing process with pid 1079824 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1079824 00:31:27.298 [2024-07-12 11:37:13.529982] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:27.298 [2024-07-12 11:37:13.530014] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:27.298 11:37:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1079824 00:31:28.673 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:28.673 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.674 11:37:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.583 
11:37:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:30.583 00:31:30.583 real 0m10.171s 00:31:30.583 user 0m4.460s 00:31:30.583 sys 0m4.245s 00:31:30.583 11:37:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.583 11:37:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:30.583 ************************************ 00:31:30.583 END TEST nvmf_async_init 00:31:30.583 ************************************ 00:31:30.583 11:37:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:30.583 11:37:16 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:30.583 11:37:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:30.583 11:37:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.583 11:37:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:30.583 ************************************ 00:31:30.583 START TEST dma 00:31:30.583 ************************************ 00:31:30.583 11:37:16 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:30.843 * Looking for test storage... 
00:31:30.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.843 11:37:16 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.843 11:37:16 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.843 11:37:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:31:30.843 11:37:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.843 11:37:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.843 11:37:17 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.843 11:37:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.843 11:37:17 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:30.843 11:37:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:31:30.843 11:37:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.843 11:37:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.843 11:37:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:30.843 11:37:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:31:30.843 00:31:30.843 real 0m0.113s 00:31:30.843 user 0m0.054s 00:31:30.843 sys 0m0.066s 00:31:30.843 11:37:17 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.843 11:37:17 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:31:30.843 ************************************ 00:31:30.843 END TEST dma 00:31:30.843 ************************************ 00:31:30.843 11:37:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:31:30.843 11:37:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:30.843 11:37:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:30.843 11:37:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.843 11:37:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:30.843 ************************************ 00:31:30.843 START TEST nvmf_identify 00:31:30.843 ************************************ 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:30.843 * Looking for test storage... 00:31:30.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.843 11:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:31:31.103 11:37:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:36.380 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:36.381 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:36.381 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:36.381 Found net devices under 0000:86:00.0: cvl_0_0 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:36.381 Found net devices under 0000:86:00.1: cvl_0_1 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:36.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:36.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:31:36.381 00:31:36.381 --- 10.0.0.2 ping statistics --- 00:31:36.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.381 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:31:36.381 00:31:36.381 --- 10.0.0.1 ping statistics --- 00:31:36.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.381 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:36.381 11:37:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1083643 00:31:36.381 11:37:22 
nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1083643 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1083643 ']' 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:36.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.381 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.381 [2024-07-12 11:37:22.085311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:36.381 [2024-07-12 11:37:22.085405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.381 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.381 [2024-07-12 11:37:22.195256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.381 [2024-07-12 11:37:22.411546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:36.381 [2024-07-12 11:37:22.411590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.381 [2024-07-12 11:37:22.411603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.381 [2024-07-12 11:37:22.411610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.381 [2024-07-12 11:37:22.411619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.381 [2024-07-12 11:37:22.411857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.381 [2024-07-12 11:37:22.411936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.381 [2024-07-12 11:37:22.412108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.381 [2024-07-12 11:37:22.412117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.641 [2024-07-12 11:37:22.877036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.641 11:37:22 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.641 11:37:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 Malloc0 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 [2024-07-12 11:37:23.027190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.900 [ 00:31:36.900 { 00:31:36.900 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:36.900 "subtype": "Discovery", 00:31:36.900 "listen_addresses": [ 00:31:36.900 { 00:31:36.900 "trtype": "TCP", 00:31:36.900 "adrfam": "IPv4", 00:31:36.900 "traddr": "10.0.0.2", 00:31:36.900 "trsvcid": "4420" 00:31:36.900 } 00:31:36.900 ], 00:31:36.900 "allow_any_host": true, 00:31:36.900 "hosts": [] 00:31:36.900 }, 00:31:36.900 { 00:31:36.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.900 "subtype": "NVMe", 00:31:36.900 "listen_addresses": [ 00:31:36.900 { 00:31:36.900 "trtype": "TCP", 00:31:36.900 "adrfam": "IPv4", 00:31:36.900 "traddr": "10.0.0.2", 00:31:36.900 "trsvcid": "4420" 00:31:36.900 } 00:31:36.900 ], 00:31:36.900 "allow_any_host": true, 00:31:36.900 "hosts": [], 00:31:36.900 "serial_number": "SPDK00000000000001", 00:31:36.900 "model_number": "SPDK bdev Controller", 00:31:36.900 "max_namespaces": 32, 00:31:36.900 "min_cntlid": 1, 00:31:36.900 "max_cntlid": 65519, 00:31:36.900 "namespaces": [ 00:31:36.900 { 00:31:36.900 "nsid": 1, 00:31:36.900 "bdev_name": "Malloc0", 00:31:36.900 "name": "Malloc0", 00:31:36.900 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:36.900 "eui64": "ABCDEF0123456789", 00:31:36.900 "uuid": "668ded7e-5435-4a17-b878-de48680748ee" 00:31:36.900 } 00:31:36.900 ] 00:31:36.900 } 00:31:36.900 ] 00:31:36.900 11:37:23 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.900 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:36.900 [2024-07-12 11:37:23.096815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:36.900 [2024-07-12 11:37:23.096883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083747 ] 00:31:36.900 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.900 [2024-07-12 11:37:23.142866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:36.900 [2024-07-12 11:37:23.142973] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:36.900 [2024-07-12 11:37:23.142984] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:36.900 [2024-07-12 11:37:23.143002] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:36.900 [2024-07-12 11:37:23.143017] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:36.900 [2024-07-12 11:37:23.143274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:36.900 [2024-07-12 11:37:23.143315] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:36.900 [2024-07-12 11:37:23.157395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:36.900 [2024-07-12 11:37:23.157422] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:36.900 [2024-07-12 11:37:23.157430] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:36.900 [2024-07-12 11:37:23.157438] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:36.900 [2024-07-12 11:37:23.157492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.157502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.157511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.157533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:36.900 [2024-07-12 11:37:23.157558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.165396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.165419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.165425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.165451] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:36.900 [2024-07-12 11:37:23.165469] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:36.900 [2024-07-12 11:37:23.165478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:36.900 [2024-07-12 11:37:23.165497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:31:36.900 [2024-07-12 11:37:23.165504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.165526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.165547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.165657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.165667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.165673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.165693] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:36.900 [2024-07-12 11:37:23.165704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:36.900 [2024-07-12 11:37:23.165714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.165741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.165756] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.165834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.165843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.165852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.165866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:36.900 [2024-07-12 11:37:23.165880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.165890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.165902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.165912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.165927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.165993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.166001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.166006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.166019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.166031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.166058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.166072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.166151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.166161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.166166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.166178] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:36.900 [2024-07-12 11:37:23.166188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.166199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.166307] 
nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:36.900 [2024-07-12 11:37:23.166314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.166326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.166350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.166365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.166452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.166462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.166467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.166481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:36.900 [2024-07-12 11:37:23.166495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 
00:31:36.900 [2024-07-12 11:37:23.166516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.900 [2024-07-12 11:37:23.166530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.166616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.166624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.166629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.166641] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:36.900 [2024-07-12 11:37:23.166648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:36.900 [2024-07-12 11:37:23.166662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:36.900 [2024-07-12 11:37:23.166671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:36.900 [2024-07-12 11:37:23.166692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.900 [2024-07-12 11:37:23.166709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:36.900 [2024-07-12 11:37:23.166726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.900 [2024-07-12 11:37:23.166858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.900 [2024-07-12 11:37:23.166867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.900 [2024-07-12 11:37:23.166872] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166879] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:36.900 [2024-07-12 11:37:23.166885] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:36.900 [2024-07-12 11:37:23.166892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166905] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.900 [2024-07-12 11:37:23.166933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.900 [2024-07-12 11:37:23.166937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.166943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.900 [2024-07-12 11:37:23.166959] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:36.900 [2024-07-12 11:37:23.166967] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:36.900 [2024-07-12 11:37:23.166973] 
nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:36.900 [2024-07-12 11:37:23.166981] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:36.900 [2024-07-12 11:37:23.166989] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:36.900 [2024-07-12 11:37:23.166998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:36.900 [2024-07-12 11:37:23.167010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:36.900 [2024-07-12 11:37:23.167021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.167029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.900 [2024-07-12 11:37:23.167035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:36.901 [2024-07-12 11:37:23.167061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.901 [2024-07-12 11:37:23.167152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.167160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.167165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.167179] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.901 [2024-07-12 11:37:23.167215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.901 [2024-07-12 11:37:23.167245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.901 [2024-07-12 11:37:23.167270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.901 [2024-07-12 11:37:23.167297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:36.901 [2024-07-12 11:37:23.167310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:36.901 [2024-07-12 11:37:23.167320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.901 [2024-07-12 11:37:23.167352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:36.901 [2024-07-12 11:37:23.167359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:36.901 [2024-07-12 11:37:23.167365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:36.901 [2024-07-12 11:37:23.167371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:36.901 [2024-07-12 11:37:23.167384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:36.901 [2024-07-12 11:37:23.167498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.167509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.167514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167519] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.167527] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:36.901 [2024-07-12 11:37:23.167535] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:36.901 [2024-07-12 11:37:23.167554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.167570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.901 [2024-07-12 11:37:23.167584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:36.901 [2024-07-12 11:37:23.167679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.901 [2024-07-12 11:37:23.167693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.901 [2024-07-12 11:37:23.167698] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167705] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:36.901 [2024-07-12 11:37:23.167711] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:36.901 [2024-07-12 11:37:23.167722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.167744] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.208482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.208487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.208534] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:36.901 [2024-07-12 11:37:23.208574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.208597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.901 [2024-07-12 11:37:23.208608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.208629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.901 [2024-07-12 11:37:23.208648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:36.901 [2024-07-12 11:37:23.208656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:36.901 [2024-07-12 11:37:23.208840] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.901 [2024-07-12 11:37:23.208850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.901 [2024-07-12 11:37:23.208858] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208864] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:36.901 [2024-07-12 11:37:23.208871] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:36.901 [2024-07-12 11:37:23.208877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208887] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.208909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.208914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.208919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.253397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.253418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.253423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.253459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 
[2024-07-12 11:37:23.253466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.253482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.901 [2024-07-12 11:37:23.253507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:36.901 [2024-07-12 11:37:23.253675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.901 [2024-07-12 11:37:23.253684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.901 [2024-07-12 11:37:23.253689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253694] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:36.901 [2024-07-12 11:37:23.253701] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:36.901 [2024-07-12 11:37:23.253706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253721] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:36.901 [2024-07-12 11:37:23.253743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:36.901 [2024-07-12 11:37:23.253748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:36.901 [2024-07-12 11:37:23.253768] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:36.901 [2024-07-12 11:37:23.253786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.901 [2024-07-12 11:37:23.253806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:36.901 [2024-07-12 11:37:23.253919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:36.901 [2024-07-12 11:37:23.253927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:36.901 [2024-07-12 11:37:23.253932] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253937] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:36.901 [2024-07-12 11:37:23.253943] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:36.901 [2024-07-12 11:37:23.253948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253956] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:36.901 [2024-07-12 11:37:23.253961] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.165 [2024-07-12 11:37:23.294476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.165 [2024-07-12 11:37:23.294496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.165 [2024-07-12 11:37:23.294502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.165 [2024-07-12 11:37:23.294508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 
00:31:37.165 =====================================================
00:31:37.165 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:31:37.165 =====================================================
00:31:37.165 Controller Capabilities/Features
00:31:37.165 ================================
00:31:37.165 Vendor ID: 0000
00:31:37.165 Subsystem Vendor ID: 0000
00:31:37.165 Serial Number: ....................
00:31:37.165 Model Number: ........................................
00:31:37.165 Firmware Version: 24.09
00:31:37.165 Recommended Arb Burst: 0
00:31:37.165 IEEE OUI Identifier: 00 00 00
00:31:37.165 Multi-path I/O
00:31:37.165 May have multiple subsystem ports: No
00:31:37.165 May have multiple controllers: No
00:31:37.165 Associated with SR-IOV VF: No
00:31:37.165 Max Data Transfer Size: 131072
00:31:37.165 Max Number of Namespaces: 0
00:31:37.165 Max Number of I/O Queues: 1024
00:31:37.165 NVMe Specification Version (VS): 1.3
00:31:37.165 NVMe Specification Version (Identify): 1.3
00:31:37.165 Maximum Queue Entries: 128
00:31:37.165 Contiguous Queues Required: Yes
00:31:37.165 Arbitration Mechanisms Supported
00:31:37.165 Weighted Round Robin: Not Supported
00:31:37.165 Vendor Specific: Not Supported
00:31:37.165 Reset Timeout: 15000 ms
00:31:37.165 Doorbell Stride: 4 bytes
00:31:37.165 NVM Subsystem Reset: Not Supported
00:31:37.165 Command Sets Supported
00:31:37.166 NVM Command Set: Supported
00:31:37.166 Boot Partition: Not Supported
00:31:37.166 Memory Page Size Minimum: 4096 bytes
00:31:37.166 Memory Page Size Maximum: 4096 bytes
00:31:37.166 Persistent Memory Region: Not Supported
00:31:37.166 Optional Asynchronous Events Supported
00:31:37.166 Namespace Attribute Notices: Not Supported
00:31:37.166 Firmware Activation Notices: Not Supported
00:31:37.166 ANA Change Notices: Not Supported
00:31:37.166 PLE Aggregate Log Change Notices: Not Supported
00:31:37.166 LBA Status Info Alert Notices: Not Supported
00:31:37.165 EGE Aggregate Log Change Notices: Not Supported
00:31:37.166 Normal NVM Subsystem Shutdown event: Not Supported
00:31:37.166 Zone Descriptor Change Notices: Not Supported
00:31:37.166 Discovery Log Change Notices: Supported
00:31:37.166 Controller Attributes
00:31:37.166 128-bit Host Identifier: Not Supported
00:31:37.166 Non-Operational Permissive Mode: Not Supported
00:31:37.166 NVM Sets: Not Supported
00:31:37.166 Read Recovery Levels: Not Supported
00:31:37.166 Endurance Groups: Not Supported
00:31:37.166 Predictable Latency Mode: Not Supported
00:31:37.166 Traffic Based Keep ALive: Not Supported
00:31:37.166 Namespace Granularity: Not Supported
00:31:37.166 SQ Associations: Not Supported
00:31:37.166 UUID List: Not Supported
00:31:37.166 Multi-Domain Subsystem: Not Supported
00:31:37.166 Fixed Capacity Management: Not Supported
00:31:37.166 Variable Capacity Management: Not Supported
00:31:37.166 Delete Endurance Group: Not Supported
00:31:37.166 Delete NVM Set: Not Supported
00:31:37.166 Extended LBA Formats Supported: Not Supported
00:31:37.166 Flexible Data Placement Supported: Not Supported
00:31:37.166
00:31:37.166 Controller Memory Buffer Support
00:31:37.166 ================================
00:31:37.166 Supported: No
00:31:37.166
00:31:37.166 Persistent Memory Region Support
00:31:37.166 ================================
00:31:37.166 Supported: No
00:31:37.166
00:31:37.166 Admin Command Set Attributes
00:31:37.166 ============================
00:31:37.166 Security Send/Receive: Not Supported
00:31:37.166 Format NVM: Not Supported
00:31:37.166 Firmware Activate/Download: Not Supported
00:31:37.166 Namespace Management: Not Supported
00:31:37.166 Device Self-Test: Not Supported
00:31:37.166 Directives: Not Supported
00:31:37.166 NVMe-MI: Not Supported
00:31:37.166 Virtualization Management: Not Supported
00:31:37.166 Doorbell Buffer Config: Not Supported
00:31:37.166 Get LBA Status Capability: Not Supported
00:31:37.166 Command & Feature Lockdown Capability: Not Supported
00:31:37.166 Abort Command Limit: 1
00:31:37.166 Async Event Request Limit: 4
00:31:37.166 Number of Firmware Slots: N/A
00:31:37.166 Firmware Slot 1 Read-Only: N/A
00:31:37.166 Firmware Activation Without Reset: N/A
00:31:37.166 Multiple Update Detection Support: N/A
00:31:37.166 Firmware Update Granularity: No Information Provided
00:31:37.166 Per-Namespace SMART Log: No
00:31:37.166 Asymmetric Namespace Access Log Page: Not Supported
00:31:37.166 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:37.166 Command Effects Log Page: Not Supported
00:31:37.166 Get Log Page Extended Data: Supported
00:31:37.166 Telemetry Log Pages: Not Supported
00:31:37.166 Persistent Event Log Pages: Not Supported
00:31:37.166 Supported Log Pages Log Page: May Support
00:31:37.166 Commands Supported & Effects Log Page: Not Supported
00:31:37.166 Feature Identifiers & Effects Log Page:May Support
00:31:37.166 NVMe-MI Commands & Effects Log Page: May Support
00:31:37.166 Data Area 4 for Telemetry Log: Not Supported
00:31:37.166 Error Log Page Entries Supported: 128
00:31:37.166 Keep Alive: Not Supported
00:31:37.166
00:31:37.166 NVM Command Set Attributes
00:31:37.166 ==========================
00:31:37.166 Submission Queue Entry Size
00:31:37.166 Max: 1
00:31:37.166 Min: 1
00:31:37.166 Completion Queue Entry Size
00:31:37.166 Max: 1
00:31:37.166 Min: 1
00:31:37.166 Number of Namespaces: 0
00:31:37.166 Compare Command: Not Supported
00:31:37.166 Write Uncorrectable Command: Not Supported
00:31:37.166 Dataset Management Command: Not Supported
00:31:37.166 Write Zeroes Command: Not Supported
00:31:37.166 Set Features Save Field: Not Supported
00:31:37.166 Reservations: Not Supported
00:31:37.166 Timestamp: Not Supported
00:31:37.166 Copy: Not Supported
00:31:37.166 Volatile Write Cache: Not Present
00:31:37.166 Atomic Write Unit (Normal): 1
00:31:37.166 Atomic Write Unit (PFail): 1
00:31:37.166 Atomic Compare & Write Unit: 1
00:31:37.166 Fused Compare & Write: Supported
00:31:37.166 Scatter-Gather List
00:31:37.166 SGL Command Set: Supported
00:31:37.166 SGL Keyed: Supported
00:31:37.166 SGL Bit Bucket Descriptor: Not Supported
00:31:37.166 SGL Metadata Pointer: Not Supported
00:31:37.166 Oversized SGL: Not Supported
00:31:37.166 SGL Metadata Address: Not Supported
00:31:37.166 SGL Offset: Supported
00:31:37.166 Transport SGL Data Block: Not Supported
00:31:37.166 Replay Protected Memory Block: Not Supported
00:31:37.166
00:31:37.166 Firmware Slot Information
00:31:37.166 =========================
00:31:37.166 Active slot: 0
00:31:37.166
00:31:37.166
00:31:37.166 Error Log
00:31:37.166 =========
00:31:37.166
00:31:37.166 Active Namespaces
00:31:37.166 =================
00:31:37.166 Discovery Log Page
00:31:37.166 ==================
00:31:37.166 Generation Counter: 2
00:31:37.166 Number of Records: 2
00:31:37.166 Record Format: 0
00:31:37.166
00:31:37.166 Discovery Log Entry 0
00:31:37.166 ----------------------
00:31:37.166 Transport Type: 3 (TCP)
00:31:37.166 Address Family: 1 (IPv4)
00:31:37.166 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:37.166 Entry Flags:
00:31:37.166 Duplicate Returned Information: 1
00:31:37.166 Explicit Persistent Connection Support for Discovery: 1
00:31:37.166 Transport Requirements:
00:31:37.166 Secure Channel: Not Required
00:31:37.166 Port ID: 0 (0x0000)
00:31:37.166 Controller ID: 65535 (0xffff)
00:31:37.166 Admin Max SQ Size: 128
00:31:37.166 Transport Service Identifier: 4420
00:31:37.166 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:37.166 Transport Address: 10.0.0.2
00:31:37.166 Discovery Log Entry 1
00:31:37.166 ----------------------
00:31:37.166 Transport Type: 3 (TCP)
00:31:37.166 Address Family: 1 (IPv4)
00:31:37.166 Subsystem Type: 2 (NVM Subsystem)
00:31:37.166 Entry Flags:
00:31:37.166 Duplicate Returned Information: 0
00:31:37.166 Explicit Persistent Connection Support for Discovery: 0
00:31:37.166 Transport Requirements:
00:31:37.166 Secure Channel: Not Required
00:31:37.166 Port ID: 0 (0x0000)
00:31:37.166 Controller ID: 65535 (0xffff)
00:31:37.166 Admin Max SQ Size: 128
00:31:37.166 Transport Service Identifier: 4420
00:31:37.166 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:37.166 Transport Address: 10.0.0.2 [2024-07-12 11:37:23.294645] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:31:37.166 [2024-07-12 11:37:23.294660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80
00:31:37.166 [2024-07-12 11:37:23.294673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.166 [2024-07-12 11:37:23.294681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80
00:31:37.166 [2024-07-12 11:37:23.294688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.166 [2024-07-12 11:37:23.294695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80
00:31:37.166 [2024-07-12 11:37:23.294702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.166 [2024-07-12 11:37:23.294708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:37.166 [2024-07-12 11:37:23.294715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:37.166 [2024-07-12 11:37:23.294730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:37.166 [2024-07-12 11:37:23.294737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:37.166 [2024-07-12 11:37:23.294744] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.166 [2024-07-12 11:37:23.294756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.166 [2024-07-12 11:37:23.294776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.166 [2024-07-12 11:37:23.294860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.166 [2024-07-12 11:37:23.294870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.166 [2024-07-12 11:37:23.294876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.166 [2024-07-12 11:37:23.294882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.166 [2024-07-12 11:37:23.294892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.166 [2024-07-12 11:37:23.294898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.166 [2024-07-12 11:37:23.294907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.166 [2024-07-12 11:37:23.294919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.294938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295057] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:37.167 [2024-07-12 11:37:23.295065] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:37.167 [2024-07-12 11:37:23.295076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 
11:37:23.295534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.295890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.295899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.295903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.295920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.295931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.295940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.295952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.296022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.296030] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.296035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.296052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296062] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.296074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.296087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.296156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.296165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.296174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.296191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.296211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 
[2024-07-12 11:37:23.296223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.296291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.296300] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.296306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.296323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.296343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.296355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.296429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.296437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.296442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.296459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296470] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.167 [2024-07-12 11:37:23.296479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.167 [2024-07-12 11:37:23.296492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.167 [2024-07-12 11:37:23.296572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.167 [2024-07-12 11:37:23.296580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.167 [2024-07-12 11:37:23.296584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.167 [2024-07-12 11:37:23.296602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.167 [2024-07-12 11:37:23.296608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.296621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.296634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.296713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.296721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.296725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.296743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.296763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.296776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.296857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.296868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.296875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.296893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.296904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.296912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.296926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.296994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.297002] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.297007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.297024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.297043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.297056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.297127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.297136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.297140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.297157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.297179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 
[2024-07-12 11:37:23.297192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.297263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.297272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.297277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.297293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.297304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.297313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.297326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.301390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.301408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.301416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.301422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.301439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.301446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.301451] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.301461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.301479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.168 [2024-07-12 11:37:23.301564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.301572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.301577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.301582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.301592] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:31:37.168 00:31:37.168 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:37.168 [2024-07-12 11:37:23.394464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:37.168 [2024-07-12 11:37:23.394528] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083896 ] 00:31:37.168 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.168 [2024-07-12 11:37:23.437731] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:37.168 [2024-07-12 11:37:23.437829] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:37.168 [2024-07-12 11:37:23.437839] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:37.168 [2024-07-12 11:37:23.437858] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:37.168 [2024-07-12 11:37:23.437870] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:37.168 [2024-07-12 11:37:23.441434] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:37.168 [2024-07-12 11:37:23.441474] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:37.168 [2024-07-12 11:37:23.449395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:37.168 [2024-07-12 11:37:23.449416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:37.168 [2024-07-12 11:37:23.449423] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:37.168 [2024-07-12 11:37:23.449430] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:37.168 [2024-07-12 11:37:23.449475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.449483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:31:37.168 [2024-07-12 11:37:23.449492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.449511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:37.168 [2024-07-12 11:37:23.449536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.168 [2024-07-12 11:37:23.457396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.457419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.457426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.457448] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:37.168 [2024-07-12 11:37:23.457459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:37.168 [2024-07-12 11:37:23.457470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:37.168 [2024-07-12 11:37:23.457486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.457511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.168 [2024-07-12 11:37:23.457534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b100, cid 0, qid 0 00:31:37.168 [2024-07-12 11:37:23.457714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.168 [2024-07-12 11:37:23.457724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.168 [2024-07-12 11:37:23.457729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.168 [2024-07-12 11:37:23.457746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:37.168 [2024-07-12 11:37:23.457756] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:37.168 [2024-07-12 11:37:23.457768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.168 [2024-07-12 11:37:23.457779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.168 [2024-07-12 11:37:23.457792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.457809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.457911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.457919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.457924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.457929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 
11:37:23.457937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:37.169 [2024-07-12 11:37:23.457949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.457960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.457966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.457974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.457984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.458003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.458086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.458096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.458101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.458114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.458128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 
00:31:37.169 [2024-07-12 11:37:23.458149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.458165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.458237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.458246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.458250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.458263] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:37.169 [2024-07-12 11:37:23.458270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.458280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.458387] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:37.169 [2024-07-12 11:37:23.458396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.458407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.458429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.458444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.458520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.458529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.458534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.458547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:37.169 [2024-07-12 11:37:23.458562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.458584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.458600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.458675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.458684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.458689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 
11:37:23.458694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.458700] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:37.169 [2024-07-12 11:37:23.458708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:37.169 [2024-07-12 11:37:23.458720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:37.169 [2024-07-12 11:37:23.458730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:37.169 [2024-07-12 11:37:23.458748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.458766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.169 [2024-07-12 11:37:23.458780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.458903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.169 [2024-07-12 11:37:23.458914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.169 [2024-07-12 11:37:23.458920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:37.169 [2024-07-12 11:37:23.458933] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:37.169 [2024-07-12 11:37:23.458939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458954] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458961] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.458979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.458984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.458989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.459006] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:37.169 [2024-07-12 11:37:23.459014] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:37.169 [2024-07-12 11:37:23.459023] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:37.169 [2024-07-12 11:37:23.459033] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:37.169 [2024-07-12 11:37:23.459042] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:37.169 [2024-07-12 11:37:23.459049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:37.169 [2024-07-12 11:37:23.459064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure 
aer (timeout 30000 ms) 00:31:37.169 [2024-07-12 11:37:23.459078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.459101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.169 [2024-07-12 11:37:23.459119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.169 [2024-07-12 11:37:23.459203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.169 [2024-07-12 11:37:23.459211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.169 [2024-07-12 11:37:23.459216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.169 [2024-07-12 11:37:23.459230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.459253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.169 [2024-07-12 11:37:23.459265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459276] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.459285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.169 [2024-07-12 11:37:23.459292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.459310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.169 [2024-07-12 11:37:23.459318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.169 [2024-07-12 11:37:23.459328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.169 [2024-07-12 11:37:23.459336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.170 [2024-07-12 11:37:23.459342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.170 [2024-07-12 11:37:23.459386] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.170 [2024-07-12 11:37:23.459405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:37.170 [2024-07-12 11:37:23.459412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:37.170 [2024-07-12 11:37:23.459418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:37.170 [2024-07-12 11:37:23.459425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.170 [2024-07-12 11:37:23.459431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.170 [2024-07-12 11:37:23.459542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.170 [2024-07-12 11:37:23.459551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.170 [2024-07-12 11:37:23.459555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.170 [2024-07-12 11:37:23.459569] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:37.170 [2024-07-12 11:37:23.459579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for 
set number of queues (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.170 [2024-07-12 11:37:23.459631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:37.170 [2024-07-12 11:37:23.459645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.170 [2024-07-12 11:37:23.459722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.170 [2024-07-12 11:37:23.459731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.170 [2024-07-12 11:37:23.459735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.170 [2024-07-12 11:37:23.459818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.459856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.459862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.170 [2024-07-12 11:37:23.459874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:37.170 [2024-07-12 11:37:23.459888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.170 [2024-07-12 11:37:23.459981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.170 [2024-07-12 11:37:23.459991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.170 [2024-07-12 11:37:23.459996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:37.170 [2024-07-12 11:37:23.460008] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:37.170 [2024-07-12 11:37:23.460014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460049] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460057] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.170 [2024-07-12 11:37:23.460107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.170 [2024-07-12 11:37:23.460112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.170 [2024-07-12 11:37:23.460137] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:37.170 [2024-07-12 11:37:23.460154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.460167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.460180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.170 [2024-07-12 11:37:23.460197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.170 [2024-07-12 11:37:23.460215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.170 [2024-07-12 11:37:23.460316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.170 [2024-07-12 11:37:23.460329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.170 [2024-07-12 11:37:23.460334] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460340] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:37.170 [2024-07-12 11:37:23.460345] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:37.170 [2024-07-12 11:37:23.460351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460365] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.170 [2024-07-12 11:37:23.460390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.170 [2024-07-12 11:37:23.460399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.170 
[2024-07-12 11:37:23.460404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.170 [2024-07-12 11:37:23.460427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.460443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:37.170 [2024-07-12 11:37:23.460456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.170 [2024-07-12 11:37:23.460473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.170 [2024-07-12 11:37:23.460488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.170 [2024-07-12 11:37:23.460570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.170 [2024-07-12 11:37:23.460581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.170 [2024-07-12 11:37:23.460585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.170 [2024-07-12 11:37:23.460591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:37.170 [2024-07-12 11:37:23.460599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:37.170 [2024-07-12 11:37:23.460604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460620] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:31:37.171 [2024-07-12 11:37:23.460625] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 [2024-07-12 11:37:23.460644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.460649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.460671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460729] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:37.171 [2024-07-12 11:37:23.460736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready 
(timeout 30000 ms) 00:31:37.171 [2024-07-12 11:37:23.460743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:37.171 [2024-07-12 11:37:23.460775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.460792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.460801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.460821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.171 [2024-07-12 11:37:23.460837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.171 [2024-07-12 11:37:23.460847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:37.171 [2024-07-12 11:37:23.460934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 [2024-07-12 11:37:23.460946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.460951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.460968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 
[2024-07-12 11:37:23.460977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.460985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.460990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.461001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.461007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.461016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.461030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:37.171 [2024-07-12 11:37:23.461108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 [2024-07-12 11:37:23.461116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.461121] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.461126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.461138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.461143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.461152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.461165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 
00:31:37.171 [2024-07-12 11:37:23.461235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 [2024-07-12 11:37:23.461244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.461251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.461256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.461266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.461272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.461281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.461294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:37.171 [2024-07-12 11:37:23.461368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.171 [2024-07-12 11:37:23.465384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.171 [2024-07-12 11:37:23.465397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:37.171 [2024-07-12 11:37:23.465441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.465459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 
[2024-07-12 11:37:23.465469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.465484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.465494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.465515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.465533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:37.171 [2024-07-12 11:37:23.465548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.171 [2024-07-12 11:37:23.465568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:37.171 [2024-07-12 11:37:23.465575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:37.171 [2024-07-12 11:37:23.465581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:37.171 [2024-07-12 11:37:23.465587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:37.171 [2024-07-12 11:37:23.465837] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.171 [2024-07-12 11:37:23.465847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.171 [2024-07-12 11:37:23.465853] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465858] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:37.171 [2024-07-12 11:37:23.465865] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:37.171 [2024-07-12 11:37:23.465871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465923] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465930] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.171 [2024-07-12 11:37:23.465945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.171 [2024-07-12 11:37:23.465950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465955] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:37.171 [2024-07-12 11:37:23.465961] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:37.171 [2024-07-12 11:37:23.465966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465981] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465986] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.465993] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.171 [2024-07-12 11:37:23.466000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.171 [2024-07-12 11:37:23.466004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.466009] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:37.171 [2024-07-12 11:37:23.466015] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:37.171 [2024-07-12 11:37:23.466020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.466028] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.466032] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.466039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:37.171 [2024-07-12 11:37:23.466048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:37.171 [2024-07-12 11:37:23.466053] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:37.171 [2024-07-12 11:37:23.466062] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:37.172 [2024-07-12 11:37:23.466068] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:37.172 [2024-07-12 11:37:23.466073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466085] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466095] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.172 [2024-07-12 11:37:23.466102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.172 [2024-07-12 11:37:23.466106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:37.172 [2024-07-12 11:37:23.466142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.172 [2024-07-12 11:37:23.466150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.172 [2024-07-12 11:37:23.466154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:37.172 [2024-07-12 11:37:23.466171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.172 [2024-07-12 11:37:23.466181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.172 [2024-07-12 11:37:23.466190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:37.172 [2024-07-12 11:37:23.466204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.172 [2024-07-12 11:37:23.466212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.172 [2024-07-12 11:37:23.466218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.172 [2024-07-12 11:37:23.466224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:37.172 ===================================================== 00:31:37.172 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:31:37.172 ===================================================== 00:31:37.172 Controller Capabilities/Features 00:31:37.172 ================================ 00:31:37.172 Vendor ID: 8086 00:31:37.172 Subsystem Vendor ID: 8086 00:31:37.172 Serial Number: SPDK00000000000001 00:31:37.172 Model Number: SPDK bdev Controller 00:31:37.172 Firmware Version: 24.09 00:31:37.172 Recommended Arb Burst: 6 00:31:37.172 IEEE OUI Identifier: e4 d2 5c 00:31:37.172 Multi-path I/O 00:31:37.172 May have multiple subsystem ports: Yes 00:31:37.172 May have multiple controllers: Yes 00:31:37.172 Associated with SR-IOV VF: No 00:31:37.172 Max Data Transfer Size: 131072 00:31:37.172 Max Number of Namespaces: 32 00:31:37.172 Max Number of I/O Queues: 127 00:31:37.172 NVMe Specification Version (VS): 1.3 00:31:37.172 NVMe Specification Version (Identify): 1.3 00:31:37.172 Maximum Queue Entries: 128 00:31:37.172 Contiguous Queues Required: Yes 00:31:37.172 Arbitration Mechanisms Supported 00:31:37.172 Weighted Round Robin: Not Supported 00:31:37.172 Vendor Specific: Not Supported 00:31:37.172 Reset Timeout: 15000 ms 00:31:37.172 Doorbell Stride: 4 bytes 00:31:37.172 NVM Subsystem Reset: Not Supported 00:31:37.172 Command Sets Supported 00:31:37.172 NVM Command Set: Supported 00:31:37.172 Boot Partition: Not Supported 00:31:37.172 Memory Page Size Minimum: 4096 bytes 00:31:37.172 Memory Page Size Maximum: 4096 bytes 00:31:37.172 Persistent Memory Region: Not Supported 00:31:37.172 Optional Asynchronous Events Supported 00:31:37.172 Namespace Attribute Notices: Supported 00:31:37.172 Firmware Activation Notices: Not Supported 00:31:37.172 ANA Change Notices: Not Supported 00:31:37.172 PLE Aggregate Log Change Notices: Not Supported 00:31:37.172 LBA Status Info Alert Notices: Not Supported 00:31:37.172 EGE Aggregate Log Change Notices: Not Supported 00:31:37.172 Normal NVM Subsystem Shutdown event: Not Supported 00:31:37.172 Zone Descriptor Change Notices: Not Supported 00:31:37.172 Discovery 
Log Change Notices: Not Supported 00:31:37.172 Controller Attributes 00:31:37.172 128-bit Host Identifier: Supported 00:31:37.172 Non-Operational Permissive Mode: Not Supported 00:31:37.172 NVM Sets: Not Supported 00:31:37.172 Read Recovery Levels: Not Supported 00:31:37.172 Endurance Groups: Not Supported 00:31:37.172 Predictable Latency Mode: Not Supported 00:31:37.172 Traffic Based Keep ALive: Not Supported 00:31:37.172 Namespace Granularity: Not Supported 00:31:37.172 SQ Associations: Not Supported 00:31:37.172 UUID List: Not Supported 00:31:37.172 Multi-Domain Subsystem: Not Supported 00:31:37.172 Fixed Capacity Management: Not Supported 00:31:37.172 Variable Capacity Management: Not Supported 00:31:37.172 Delete Endurance Group: Not Supported 00:31:37.172 Delete NVM Set: Not Supported 00:31:37.172 Extended LBA Formats Supported: Not Supported 00:31:37.172 Flexible Data Placement Supported: Not Supported 00:31:37.172 00:31:37.172 Controller Memory Buffer Support 00:31:37.172 ================================ 00:31:37.172 Supported: No 00:31:37.172 00:31:37.172 Persistent Memory Region Support 00:31:37.172 ================================ 00:31:37.172 Supported: No 00:31:37.172 00:31:37.172 Admin Command Set Attributes 00:31:37.172 ============================ 00:31:37.172 Security Send/Receive: Not Supported 00:31:37.172 Format NVM: Not Supported 00:31:37.172 Firmware Activate/Download: Not Supported 00:31:37.172 Namespace Management: Not Supported 00:31:37.172 Device Self-Test: Not Supported 00:31:37.172 Directives: Not Supported 00:31:37.172 NVMe-MI: Not Supported 00:31:37.172 Virtualization Management: Not Supported 00:31:37.172 Doorbell Buffer Config: Not Supported 00:31:37.172 Get LBA Status Capability: Not Supported 00:31:37.172 Command & Feature Lockdown Capability: Not Supported 00:31:37.172 Abort Command Limit: 4 00:31:37.172 Async Event Request Limit: 4 00:31:37.172 Number of Firmware Slots: N/A 00:31:37.172 Firmware Slot 1 Read-Only: N/A 00:31:37.172 
Firmware Activation Without Reset: N/A 00:31:37.172 Multiple Update Detection Support: N/A 00:31:37.172 Firmware Update Granularity: No Information Provided 00:31:37.172 Per-Namespace SMART Log: No 00:31:37.172 Asymmetric Namespace Access Log Page: Not Supported 00:31:37.172 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:37.172 Command Effects Log Page: Supported 00:31:37.172 Get Log Page Extended Data: Supported 00:31:37.172 Telemetry Log Pages: Not Supported 00:31:37.172 Persistent Event Log Pages: Not Supported 00:31:37.172 Supported Log Pages Log Page: May Support 00:31:37.172 Commands Supported & Effects Log Page: Not Supported 00:31:37.172 Feature Identifiers & Effects Log Page:May Support 00:31:37.172 NVMe-MI Commands & Effects Log Page: May Support 00:31:37.172 Data Area 4 for Telemetry Log: Not Supported 00:31:37.172 Error Log Page Entries Supported: 128 00:31:37.172 Keep Alive: Supported 00:31:37.172 Keep Alive Granularity: 10000 ms 00:31:37.172 00:31:37.172 NVM Command Set Attributes 00:31:37.172 ========================== 00:31:37.172 Submission Queue Entry Size 00:31:37.172 Max: 64 00:31:37.172 Min: 64 00:31:37.172 Completion Queue Entry Size 00:31:37.172 Max: 16 00:31:37.172 Min: 16 00:31:37.172 Number of Namespaces: 32 00:31:37.172 Compare Command: Supported 00:31:37.172 Write Uncorrectable Command: Not Supported 00:31:37.172 Dataset Management Command: Supported 00:31:37.172 Write Zeroes Command: Supported 00:31:37.172 Set Features Save Field: Not Supported 00:31:37.172 Reservations: Supported 00:31:37.172 Timestamp: Not Supported 00:31:37.172 Copy: Supported 00:31:37.172 Volatile Write Cache: Present 00:31:37.172 Atomic Write Unit (Normal): 1 00:31:37.172 Atomic Write Unit (PFail): 1 00:31:37.172 Atomic Compare & Write Unit: 1 00:31:37.172 Fused Compare & Write: Supported 00:31:37.172 Scatter-Gather List 00:31:37.172 SGL Command Set: Supported 00:31:37.172 SGL Keyed: Supported 00:31:37.172 SGL Bit Bucket Descriptor: Not Supported 00:31:37.172 SGL 
Metadata Pointer: Not Supported 00:31:37.172 Oversized SGL: Not Supported 00:31:37.172 SGL Metadata Address: Not Supported 00:31:37.172 SGL Offset: Supported 00:31:37.172 Transport SGL Data Block: Not Supported 00:31:37.172 Replay Protected Memory Block: Not Supported 00:31:37.172 00:31:37.172 Firmware Slot Information 00:31:37.172 ========================= 00:31:37.172 Active slot: 1 00:31:37.172 Slot 1 Firmware Revision: 24.09 00:31:37.172 00:31:37.172 00:31:37.172 Commands Supported and Effects 00:31:37.172 ============================== 00:31:37.172 Admin Commands 00:31:37.172 -------------- 00:31:37.172 Get Log Page (02h): Supported 00:31:37.172 Identify (06h): Supported 00:31:37.172 Abort (08h): Supported 00:31:37.172 Set Features (09h): Supported 00:31:37.172 Get Features (0Ah): Supported 00:31:37.172 Asynchronous Event Request (0Ch): Supported 00:31:37.172 Keep Alive (18h): Supported 00:31:37.172 I/O Commands 00:31:37.172 ------------ 00:31:37.172 Flush (00h): Supported LBA-Change 00:31:37.172 Write (01h): Supported LBA-Change 00:31:37.172 Read (02h): Supported 00:31:37.172 Compare (05h): Supported 00:31:37.172 Write Zeroes (08h): Supported LBA-Change 00:31:37.172 Dataset Management (09h): Supported LBA-Change 00:31:37.172 Copy (19h): Supported LBA-Change 00:31:37.172 00:31:37.172 Error Log 00:31:37.172 ========= 00:31:37.173 00:31:37.173 Arbitration 00:31:37.173 =========== 00:31:37.173 Arbitration Burst: 1 00:31:37.173 00:31:37.173 Power Management 00:31:37.173 ================ 00:31:37.173 Number of Power States: 1 00:31:37.173 Current Power State: Power State #0 00:31:37.173 Power State #0: 00:31:37.173 Max Power: 0.00 W 00:31:37.173 Non-Operational State: Operational 00:31:37.173 Entry Latency: Not Reported 00:31:37.173 Exit Latency: Not Reported 00:31:37.173 Relative Read Throughput: 0 00:31:37.173 Relative Read Latency: 0 00:31:37.173 Relative Write Throughput: 0 00:31:37.173 Relative Write Latency: 0 00:31:37.173 Idle Power: Not Reported 
00:31:37.173 Active Power: Not Reported 00:31:37.173 Non-Operational Permissive Mode: Not Supported 00:31:37.173 00:31:37.173 Health Information 00:31:37.173 ================== 00:31:37.173 Critical Warnings: 00:31:37.173 Available Spare Space: OK 00:31:37.173 Temperature: OK 00:31:37.173 Device Reliability: OK 00:31:37.173 Read Only: No 00:31:37.173 Volatile Memory Backup: OK 00:31:37.173 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:37.173 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:37.173 Available Spare: 0% 00:31:37.173 Available Spare Threshold: 0% 00:31:37.173 Life Percentage Used:[2024-07-12 11:37:23.466371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:37.173 [2024-07-12 11:37:23.466398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.173 [2024-07-12 11:37:23.466417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:37.173 [2024-07-12 11:37:23.466506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.173 [2024-07-12 11:37:23.466515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.173 [2024-07-12 11:37:23.466520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466577] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:37.173 [2024-07-12 11:37:23.466591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.173 [2024-07-12 11:37:23.466609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.173 [2024-07-12 11:37:23.466623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.173 [2024-07-12 11:37:23.466638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.173 [2024-07-12 11:37:23.466656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.173 [2024-07-12 11:37:23.466681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.173 [2024-07-12 11:37:23.466698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.173 [2024-07-12 11:37:23.466778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.173 [2024-07-12 11:37:23.466792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.173 [2024-07-12 11:37:23.466797] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.173 [2024-07-12 11:37:23.466835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.173 [2024-07-12 11:37:23.466854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.173 [2024-07-12 11:37:23.466940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.173 [2024-07-12 11:37:23.466949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.173 [2024-07-12 11:37:23.466953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.173 [2024-07-12 11:37:23.466965] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:37.173 [2024-07-12 11:37:23.466974] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:37.173 [2024-07-12 11:37:23.466987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.466997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.173 [2024-07-12 11:37:23.467003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x61500001db80) 00:31:37.173 [2024-07-12 11:37:23.467012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.173 [2024-07-12 11:37:23.467026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.174 [2024-07-12 11:37:23.469250] nvme_tcp.c:
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.469255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.469260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.175 [2024-07-12 11:37:23.469271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.175 [2024-07-12 11:37:23.469284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.175 [2024-07-12 11:37:23.469361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.175 [2024-07-12 11:37:23.469371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.175 [2024-07-12 11:37:23.469376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.473406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.175 [2024-07-12 11:37:23.473425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.473431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.473436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:37.175 [2024-07-12 11:37:23.473446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.175 [2024-07-12 11:37:23.473468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:37.175 [2024-07-12 11:37:23.473648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:37.175 [2024-07-12 11:37:23.473656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:37.175 [2024-07-12 
11:37:23.473661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:37.175 [2024-07-12 11:37:23.473666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:37.175 [2024-07-12 11:37:23.473677] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:31:37.175 0% 00:31:37.175 Data Units Read: 0 00:31:37.175 Data Units Written: 0 00:31:37.175 Host Read Commands: 0 00:31:37.175 Host Write Commands: 0 00:31:37.175 Controller Busy Time: 0 minutes 00:31:37.175 Power Cycles: 0 00:31:37.175 Power On Hours: 0 hours 00:31:37.175 Unsafe Shutdowns: 0 00:31:37.175 Unrecoverable Media Errors: 0 00:31:37.175 Lifetime Error Log Entries: 0 00:31:37.175 Warning Temperature Time: 0 minutes 00:31:37.175 Critical Temperature Time: 0 minutes 00:31:37.175 00:31:37.175 Number of Queues 00:31:37.175 ================ 00:31:37.175 Number of I/O Submission Queues: 127 00:31:37.175 Number of I/O Completion Queues: 127 00:31:37.175 00:31:37.175 Active Namespaces 00:31:37.175 ================= 00:31:37.175 Namespace ID:1 00:31:37.175 Error Recovery Timeout: Unlimited 00:31:37.175 Command Set Identifier: NVM (00h) 00:31:37.175 Deallocate: Supported 00:31:37.175 Deallocated/Unwritten Error: Not Supported 00:31:37.175 Deallocated Read Value: Unknown 00:31:37.175 Deallocate in Write Zeroes: Not Supported 00:31:37.175 Deallocated Guard Field: 0xFFFF 00:31:37.175 Flush: Supported 00:31:37.175 Reservation: Supported 00:31:37.175 Namespace Sharing Capabilities: Multiple Controllers 00:31:37.175 Size (in LBAs): 131072 (0GiB) 00:31:37.175 Capacity (in LBAs): 131072 (0GiB) 00:31:37.175 Utilization (in LBAs): 131072 (0GiB) 00:31:37.175 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:37.175 EUI64: ABCDEF0123456789 00:31:37.175 UUID: 668ded7e-5435-4a17-b878-de48680748ee 00:31:37.175 Thin Provisioning: Not Supported 00:31:37.175 Per-NS Atomic Units: Yes 00:31:37.175 
Atomic Boundary Size (Normal): 0 00:31:37.175 Atomic Boundary Size (PFail): 0 00:31:37.175 Atomic Boundary Offset: 0 00:31:37.175 Maximum Single Source Range Length: 65535 00:31:37.175 Maximum Copy Length: 65535 00:31:37.175 Maximum Source Range Count: 1 00:31:37.175 NGUID/EUI64 Never Reused: No 00:31:37.175 Namespace Write Protected: No 00:31:37.175 Number of LBA Formats: 1 00:31:37.175 Current LBA Format: LBA Format #00 00:31:37.175 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:37.175 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:37.434 rmmod nvme_tcp 00:31:37.434 rmmod nvme_fabrics 00:31:37.434 rmmod nvme_keyring 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:37.434 11:37:23 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1083643 ']' 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1083643 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1083643 ']' 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1083643 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1083643 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1083643' 00:31:37.434 killing process with pid 1083643 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1083643 00:31:37.434 11:37:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1083643 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:31:39.336 11:37:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.241 11:37:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:41.241 00:31:41.241 real 0m10.150s 00:31:41.241 user 0m10.933s 00:31:41.241 sys 0m4.151s 00:31:41.241 11:37:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:41.241 11:37:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:41.241 ************************************ 00:31:41.241 END TEST nvmf_identify 00:31:41.241 ************************************ 00:31:41.241 11:37:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:41.241 11:37:27 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:41.241 11:37:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:41.241 11:37:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.241 11:37:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.241 ************************************ 00:31:41.241 START TEST nvmf_perf 00:31:41.241 ************************************ 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:41.241 * Looking for test storage... 
00:31:41.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.241 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:41.242 11:37:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:46.512 11:37:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:46.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:46.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.512 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:46.513 Found net devices under 0000:86:00.0: cvl_0_0 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:31:46.513 Found net devices under 0000:86:00.1: cvl_0_1 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.513 
11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:46.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:31:46.513 00:31:46.513 --- 10.0.0.2 ping statistics --- 00:31:46.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.513 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:31:46.513 00:31:46.513 --- 10.0.0.1 ping statistics --- 00:31:46.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.513 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1087431 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1087431 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1087431 ']' 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:46.513 11:37:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.513 [2024-07-12 11:37:32.722919] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:46.513 [2024-07-12 11:37:32.723009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.513 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.513 [2024-07-12 11:37:32.830526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.782 [2024-07-12 11:37:33.057262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.782 [2024-07-12 11:37:33.057304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.782 [2024-07-12 11:37:33.057316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.782 [2024-07-12 11:37:33.057325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.782 [2024-07-12 11:37:33.057335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:46.782 [2024-07-12 11:37:33.057433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.782 [2024-07-12 11:37:33.057474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.782 [2024-07-12 11:37:33.057545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.782 [2024-07-12 11:37:33.057555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:47.351 11:37:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:50.659 11:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:50.659 11:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:50.659 11:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:50.659 11:37:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:50.918 [2024-07-12 11:37:37.208838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.918 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:51.176 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:51.177 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:51.435 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:51.435 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:51.695 11:37:37 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.695 [2024-07-12 11:37:37.986703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.695 11:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.955 11:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:51.955 11:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:31:51.955 11:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:51.955 11:37:38 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:53.400 Initializing NVMe Controllers 00:31:53.400 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:53.400 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:53.400 Initialization complete. Launching workers. 00:31:53.400 ======================================================== 00:31:53.400 Latency(us) 00:31:53.400 Device Information : IOPS MiB/s Average min max 00:31:53.400 PCIE (0000:5e:00.0) NSID 1 from core 0: 88407.64 345.34 361.63 48.28 4294.18 00:31:53.400 ======================================================== 00:31:53.400 Total : 88407.64 345.34 361.63 48.28 4294.18 00:31:53.400 00:31:53.400 11:37:39 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:53.400 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.777 Initializing NVMe Controllers 00:31:54.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:54.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:54.777 Initialization complete. Launching workers. 
00:31:54.777 ======================================================== 00:31:54.777 Latency(us) 00:31:54.777 Device Information : IOPS MiB/s Average min max 00:31:54.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 142.00 0.55 7307.29 130.59 44968.58 00:31:54.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19470.04 7822.64 47911.19 00:31:54.777 ======================================================== 00:31:54.777 Total : 194.00 0.76 10567.40 130.59 47911.19 00:31:54.777 00:31:54.777 11:37:40 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.777 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.153 Initializing NVMe Controllers 00:31:56.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:56.153 Initialization complete. Launching workers. 
00:31:56.153 ======================================================== 00:31:56.153 Latency(us) 00:31:56.153 Device Information : IOPS MiB/s Average min max 00:31:56.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9367.99 36.59 3416.19 419.72 7553.87 00:31:56.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3932.00 15.36 8180.08 6923.80 15718.29 00:31:56.153 ======================================================== 00:31:56.153 Total : 13299.99 51.95 4824.58 419.72 15718.29 00:31:56.153 00:31:56.153 11:37:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:56.153 11:37:42 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:56.153 11:37:42 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.153 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.441 Initializing NVMe Controllers 00:31:59.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.441 Controller IO queue size 128, less than required. 00:31:59.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.441 Controller IO queue size 128, less than required. 00:31:59.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:59.441 Initialization complete. Launching workers. 
00:31:59.441 ======================================================== 00:31:59.441 Latency(us) 00:31:59.441 Device Information : IOPS MiB/s Average min max 00:31:59.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1574.77 393.69 83244.55 50306.25 260852.66 00:31:59.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 571.92 142.98 255902.40 122999.57 616798.65 00:31:59.441 ======================================================== 00:31:59.441 Total : 2146.69 536.67 129243.76 50306.25 616798.65 00:31:59.441 00:31:59.441 11:37:45 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:59.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.441 No valid NVMe controllers or AIO or URING devices found 00:31:59.441 Initializing NVMe Controllers 00:31:59.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.441 Controller IO queue size 128, less than required. 00:31:59.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:59.441 Controller IO queue size 128, less than required. 00:31:59.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.441 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:59.441 WARNING: Some requested NVMe devices were skipped 00:31:59.441 11:37:45 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:59.441 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.727 Initializing NVMe Controllers 00:32:02.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.727 Controller IO queue size 128, less than required. 00:32:02.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.727 Controller IO queue size 128, less than required. 00:32:02.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:02.727 Initialization complete. Launching workers. 
00:32:02.727 00:32:02.727 ==================== 00:32:02.727 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:02.727 TCP transport: 00:32:02.727 polls: 11571 00:32:02.727 idle_polls: 8387 00:32:02.727 sock_completions: 3184 00:32:02.727 nvme_completions: 5607 00:32:02.727 submitted_requests: 8322 00:32:02.727 queued_requests: 1 00:32:02.727 00:32:02.727 ==================== 00:32:02.727 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:02.727 TCP transport: 00:32:02.727 polls: 12293 00:32:02.727 idle_polls: 8427 00:32:02.727 sock_completions: 3866 00:32:02.727 nvme_completions: 6257 00:32:02.727 submitted_requests: 9418 00:32:02.727 queued_requests: 1 00:32:02.727 ======================================================== 00:32:02.727 Latency(us) 00:32:02.727 Device Information : IOPS MiB/s Average min max 00:32:02.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1401.40 350.35 94292.82 56290.87 312509.84 00:32:02.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1563.88 390.97 85079.21 47339.36 433688.84 00:32:02.727 ======================================================== 00:32:02.727 Total : 2965.28 741.32 89433.58 47339.36 433688.84 00:32:02.727 00:32:02.727 11:37:48 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:02.727 11:37:48 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:02.727 11:37:48 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:02.727 11:37:48 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:32:02.727 11:37:48 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:06.009 11:37:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:06.009 { 00:32:06.009 "uuid": "032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc", 00:32:06.009 "name": "lvs_0", 00:32:06.009 "base_bdev": "Nvme0n1", 00:32:06.009 "total_data_clusters": 238234, 00:32:06.009 "free_clusters": 238234, 00:32:06.009 "block_size": 512, 00:32:06.009 "cluster_size": 4194304 00:32:06.009 } 00:32:06.009 ]' 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc") .free_clusters' 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc") .cluster_size' 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:32:06.009 952936 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:32:06.009 11:37:52 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc lbd_0 20480 00:32:06.576 11:37:52 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=855aee38-2c76-47d1-959f-ab31de4d2fa1 00:32:06.576 11:37:52 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 855aee38-2c76-47d1-959f-ab31de4d2fa1 lvs_n_0 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e89848cc-b731-4c44-ae90-2175ec41b1f2 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e89848cc-b731-4c44-ae90-2175ec41b1f2 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e89848cc-b731-4c44-ae90-2175ec41b1f2 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:07.158 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:07.417 { 00:32:07.417 "uuid": "032f5c6a-67e9-43d6-8299-bb8fe8e3a6cc", 00:32:07.417 "name": "lvs_0", 00:32:07.417 "base_bdev": "Nvme0n1", 00:32:07.417 "total_data_clusters": 238234, 00:32:07.417 "free_clusters": 233114, 00:32:07.417 "block_size": 512, 00:32:07.417 "cluster_size": 4194304 00:32:07.417 }, 00:32:07.417 { 00:32:07.417 "uuid": "e89848cc-b731-4c44-ae90-2175ec41b1f2", 00:32:07.417 "name": "lvs_n_0", 00:32:07.417 "base_bdev": "855aee38-2c76-47d1-959f-ab31de4d2fa1", 00:32:07.417 "total_data_clusters": 5114, 00:32:07.417 "free_clusters": 
5114, 00:32:07.417 "block_size": 512, 00:32:07.417 "cluster_size": 4194304 00:32:07.417 } 00:32:07.417 ]' 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e89848cc-b731-4c44-ae90-2175ec41b1f2") .free_clusters' 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e89848cc-b731-4c44-ae90-2175ec41b1f2") .cluster_size' 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:32:07.417 20456 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:07.417 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e89848cc-b731-4c44-ae90-2175ec41b1f2 lbd_nest_0 20456 00:32:07.675 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=98681e5f-8cce-4d0e-9195-c3a6c174e2ac 00:32:07.675 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:07.675 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:07.675 11:37:53 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 98681e5f-8cce-4d0e-9195-c3a6c174e2ac 00:32:07.934 11:37:54 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.192 11:37:54 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:08.192 11:37:54 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:08.192 11:37:54 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:08.192 11:37:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:08.192 11:37:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.192 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.396 Initializing NVMe Controllers 00:32:20.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:20.396 Initialization complete. Launching workers. 00:32:20.396 ======================================================== 00:32:20.396 Latency(us) 00:32:20.396 Device Information : IOPS MiB/s Average min max 00:32:20.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.10 0.02 21247.45 152.26 48147.55 00:32:20.396 ======================================================== 00:32:20.396 Total : 47.10 0.02 21247.45 152.26 48147.55 00:32:20.396 00:32:20.396 11:38:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:20.396 11:38:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:20.396 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.375 Initializing NVMe Controllers 00:32:30.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:30.375 Initialization complete. 
Launching workers. 00:32:30.375 ======================================================== 00:32:30.375 Latency(us) 00:32:30.375 Device Information : IOPS MiB/s Average min max 00:32:30.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.48 7.94 15763.74 4983.27 48820.95 00:32:30.375 ======================================================== 00:32:30.375 Total : 63.48 7.94 15763.74 4983.27 48820.95 00:32:30.375 00:32:30.375 11:38:15 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:30.375 11:38:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:30.375 11:38:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:30.375 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.385 Initializing NVMe Controllers 00:32:40.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:40.385 Initialization complete. Launching workers. 
00:32:40.385 ======================================================== 00:32:40.385 Latency(us) 00:32:40.385 Device Information : IOPS MiB/s Average min max 00:32:40.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8180.20 3.99 3911.65 313.24 9261.84 00:32:40.385 ======================================================== 00:32:40.385 Total : 8180.20 3.99 3911.65 313.24 9261.84 00:32:40.385 00:32:40.385 11:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:40.385 11:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:40.385 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.362 Initializing NVMe Controllers 00:32:50.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:50.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:50.362 Initialization complete. Launching workers. 
00:32:50.362 ======================================================== 00:32:50.362 Latency(us) 00:32:50.362 Device Information : IOPS MiB/s Average min max 00:32:50.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3875.19 484.40 8261.39 662.69 30412.27 00:32:50.362 ======================================================== 00:32:50.362 Total : 3875.19 484.40 8261.39 662.69 30412.27 00:32:50.362 00:32:50.363 11:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:50.363 11:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:50.363 11:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.363 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.339 Initializing NVMe Controllers 00:33:00.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:00.339 Controller IO queue size 128, less than required. 00:33:00.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:00.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:00.339 Initialization complete. Launching workers. 
00:33:00.339 ======================================================== 00:33:00.339 Latency(us) 00:33:00.339 Device Information : IOPS MiB/s Average min max 00:33:00.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12663.84 6.18 10107.46 1576.50 22930.18 00:33:00.339 ======================================================== 00:33:00.339 Total : 12663.84 6.18 10107.46 1576.50 22930.18 00:33:00.339 00:33:00.339 11:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:00.339 11:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:00.599 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.805 Initializing NVMe Controllers 00:33:12.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:12.805 Controller IO queue size 128, less than required. 00:33:12.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:12.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:12.805 Initialization complete. Launching workers. 
00:33:12.805 ======================================================== 00:33:12.805 Latency(us) 00:33:12.805 Device Information : IOPS MiB/s Average min max 00:33:12.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1203.90 150.49 106767.17 16901.26 198760.44 00:33:12.805 ======================================================== 00:33:12.805 Total : 1203.90 150.49 106767.17 16901.26 198760.44 00:33:12.805 00:33:12.805 11:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:12.805 11:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98681e5f-8cce-4d0e-9195-c3a6c174e2ac 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 855aee38-2c76-47d1-959f-ab31de4d2fa1 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:12.805 rmmod 
nvme_tcp 00:33:12.805 rmmod nvme_fabrics 00:33:12.805 rmmod nvme_keyring 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1087431 ']' 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1087431 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1087431 ']' 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1087431 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1087431 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1087431' 00:33:12.805 killing process with pid 1087431 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1087431 00:33:12.805 11:38:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1087431 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:15.341 11:39:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.245 11:39:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:17.245 00:33:17.245 real 1m36.139s 00:33:17.245 user 5m46.362s 00:33:17.245 sys 0m15.234s 00:33:17.245 11:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:17.245 11:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:17.245 ************************************ 00:33:17.245 END TEST nvmf_perf 00:33:17.245 ************************************ 00:33:17.245 11:39:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:17.246 11:39:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:17.246 11:39:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:17.246 11:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:17.246 11:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.246 ************************************ 00:33:17.246 START TEST nvmf_fio_host 00:33:17.246 ************************************ 00:33:17.246 11:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:17.504 * Looking for test storage... 
00:33:17.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:17.505 
11:39:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:17.505 11:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:22.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:22.811 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:22.811 Found net devices under 0000:86:00.0: cvl_0_0 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:22.811 Found net devices under 0000:86:00.1: cvl_0_1 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:22.811 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:22.811 
11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:22.812 11:39:08 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:22.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:33:22.812 00:33:22.812 --- 10.0.0.2 ping statistics --- 00:33:22.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.812 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:22.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:33:22.812 00:33:22.812 --- 10.0.0.1 ping statistics --- 00:33:22.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.812 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:22.812 11:39:09 
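The `nvmf_tcp_init` steps traced above (flush addresses, move one NIC port into a network namespace as the target side, address both ends, open TCP/4420, verify with ping in each direction) can be collected into a single function. This is only a sketch of the sequence as it appears in the log: it needs root and the physical interface names from this machine, so it is defined here without being invoked.

```shell
# Sketch of the nvmf_tcp_init sequence from this run: cvl_0_0 becomes the
# target inside namespace cvl_0_0_ns_spdk, cvl_0_1 stays in the root
# namespace as the initiator. Requires root; defined, not called.
setup_tcp_netns() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1 # target -> initiator
}
```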
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1105391 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1105391 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1105391 ']' 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:22.812 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.071 [2024-07-12 11:39:09.195429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:23.071 [2024-07-12 11:39:09.195531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.071 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.071 [2024-07-12 11:39:09.306574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.330 [2024-07-12 11:39:09.534024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.330 [2024-07-12 11:39:09.534069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.330 [2024-07-12 11:39:09.534081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.330 [2024-07-12 11:39:09.534091] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.330 [2024-07-12 11:39:09.534101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:23.330 [2024-07-12 11:39:09.534173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.330 [2024-07-12 11:39:09.534249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.330 [2024-07-12 11:39:09.534312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.330 [2024-07-12 11:39:09.534322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.898 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:23.898 11:39:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:33:23.898 11:39:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:23.898 [2024-07-12 11:39:10.158539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.898 11:39:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:23.898 11:39:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:23.898 11:39:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.898 11:39:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:24.157 Malloc1 00:33:24.157 11:39:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:24.416 11:39:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:24.675 11:39:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.675 
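The rpc.py calls that build the test target follow a fixed order: transport first, then the backing Malloc bdev, then the subsystem, its namespace, and finally the listeners. Collected below as a sketch of the sequence from the log, assuming a running `nvmf_tgt` and `rpc.py` on `PATH`; it is defined without being invoked.

```shell
# Order of the rpc.py calls host/fio.sh issues in this run. Assumes
# nvmf_tgt is already up and rpc.py resolves; defined, not called.
build_fio_target() {
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
```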
[2024-07-12 11:39:11.005014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:24.957 11:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:25.264 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:25.264 fio-3.35 00:33:25.264 Starting 1 thread 00:33:25.264 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.798 00:33:27.798 test: (groupid=0, jobs=1): err= 0: pid=1106079: Fri Jul 12 11:39:14 2024 00:33:27.798 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.4MiB/2006msec) 00:33:27.798 slat (nsec): min=1799, max=275789, avg=2079.82, stdev=2779.11 00:33:27.798 clat (usec): min=3851, max=12371, avg=7010.52, stdev=540.23 00:33:27.798 lat (usec): min=3909, max=12373, avg=7012.60, stdev=540.07 00:33:27.798 clat percentiles (usec): 00:33:27.798 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587], 00:33:27.798 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:33:27.799 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7832], 00:33:27.799 | 99.00th=[ 8160], 99.50th=[ 
8291], 99.90th=[10683], 99.95th=[10945], 00:33:27.799 | 99.99th=[12387] 00:33:27.799 bw ( KiB/s): min=39160, max=40376, per=99.93%, avg=39988.00, stdev=563.13, samples=4 00:33:27.799 iops : min= 9790, max=10094, avg=9997.00, stdev=140.78, samples=4 00:33:27.799 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.5MiB/2006msec); 0 zone resets 00:33:27.799 slat (nsec): min=1892, max=254569, avg=2179.09, stdev=2070.25 00:33:27.799 clat (usec): min=2990, max=10256, avg=5704.63, stdev=432.42 00:33:27.799 lat (usec): min=3014, max=10258, avg=5706.81, stdev=432.35 00:33:27.799 clat percentiles (usec): 00:33:27.799 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5342], 00:33:27.799 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:33:27.799 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:33:27.799 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 8029], 99.95th=[10028], 00:33:27.799 | 99.99th=[10159] 00:33:27.799 bw ( KiB/s): min=39504, max=40440, per=100.00%, avg=40078.00, stdev=416.33, samples=4 00:33:27.799 iops : min= 9876, max=10110, avg=10019.50, stdev=104.08, samples=4 00:33:27.799 lat (msec) : 4=0.05%, 10=99.86%, 20=0.09% 00:33:27.799 cpu : usr=77.51%, sys=21.25%, ctx=52, majf=0, minf=1534 00:33:27.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:27.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:27.799 issued rwts: total=20069,20091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:27.799 00:33:27.799 Run status group 0 (all jobs): 00:33:27.799 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.4MiB (82.2MB), run=2006-2006msec 00:33:27.799 WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.5MiB (82.3MB), run=2006-2006msec 00:33:28.058 
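As a sanity check on the fio summary above: with `--bs=4096`, the reported bandwidth follows directly from the average IOPS (values taken from this run's read side).

```shell
# bw(KiB/s) = iops * block_size / 1024, using the averages fio printed.
iops=9997
bs=4096
bw_kib=$((iops * bs / 1024))
echo "avg read bw: ${bw_kib} KiB/s"   # matches fio's avg=39988
```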
----------------------------------------------------- 00:33:28.058 Suppressions used: 00:33:28.058 count bytes template 00:33:28.058 1 57 /usr/src/fio/parse.c 00:33:28.058 1 8 libtcmalloc_minimal.so 00:33:28.058 ----------------------------------------------------- 00:33:28.058 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:28.058 11:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:28.624 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:28.624 fio-3.35 00:33:28.624 Starting 1 thread 00:33:28.624 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.160 00:33:31.160 test: (groupid=0, jobs=1): err= 0: pid=1106645: Fri Jul 12 11:39:17 2024 00:33:31.160 read: IOPS=9303, BW=145MiB/s (152MB/s)(291MiB/2005msec) 00:33:31.160 slat (usec): min=2, max=111, avg= 3.25, stdev= 1.46 00:33:31.160 clat (usec): min=1671, max=51275, avg=8018.73, stdev=3711.80 00:33:31.160 lat (usec): min=1675, max=51278, avg=8021.98, stdev=3711.81 00:33:31.160 clat percentiles (usec): 00:33:31.160 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6194], 00:33:31.160 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7701], 60.00th=[ 8225], 00:33:31.160 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10159], 95.00th=[11207], 00:33:31.160 | 99.00th=[12911], 99.50th=[45351], 99.90th=[50594], 99.95th=[51119], 00:33:31.160 | 99.99th=[51119] 00:33:31.160 bw ( KiB/s): min=67456, max=85536, per=49.46%, avg=73624.00, stdev=8218.51, samples=4 00:33:31.160 iops : min= 4216, max= 5346, avg=4601.50, stdev=513.66, samples=4 00:33:31.160 write: 
IOPS=5443, BW=85.1MiB/s (89.2MB/s)(151MiB/1770msec); 0 zone resets 00:33:31.160 slat (usec): min=28, max=271, avg=32.69, stdev= 5.09 00:33:31.160 clat (usec): min=4039, max=17865, avg=9999.44, stdev=1688.01 00:33:31.160 lat (usec): min=4071, max=17901, avg=10032.13, stdev=1688.08 00:33:31.160 clat percentiles (usec): 00:33:31.160 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8586], 00:33:31.160 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10290], 00:33:31.160 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12256], 95.00th=[13173], 00:33:31.160 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15270], 99.95th=[15533], 00:33:31.160 | 99.99th=[17957] 00:33:31.160 bw ( KiB/s): min=70080, max=88192, per=87.96%, avg=76608.00, stdev=8328.04, samples=4 00:33:31.160 iops : min= 4380, max= 5512, avg=4788.00, stdev=520.50, samples=4 00:33:31.160 lat (msec) : 2=0.02%, 4=0.42%, 10=75.47%, 20=23.64%, 50=0.36% 00:33:31.160 lat (msec) : 100=0.08% 00:33:31.160 cpu : usr=86.78%, sys=12.42%, ctx=41, majf=0, minf=2349 00:33:31.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:33:31.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:31.160 issued rwts: total=18653,9635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:31.160 00:33:31.160 Run status group 0 (all jobs): 00:33:31.160 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=291MiB (306MB), run=2005-2005msec 00:33:31.160 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=151MiB (158MB), run=1770-1770msec 00:33:31.160 ----------------------------------------------------- 00:33:31.160 Suppressions used: 00:33:31.160 count bytes template 00:33:31.160 1 57 /usr/src/fio/parse.c 00:33:31.160 251 24096 /usr/src/fio/iolog.c 00:33:31.160 1 8 libtcmalloc_minimal.so 
00:33:31.160 ----------------------------------------------------- 00:33:31.160 00:33:31.160 11:39:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:31.160 11:39:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:31.160 11:39:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:31.160 11:39:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:31.160 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:33:31.161 11:39:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:34.449 Nvme0n1 00:33:34.449 11:39:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=51be1dcb-c0d0-4348-8b70-54b3781ecff1 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 51be1dcb-c0d0-4348-8b70-54b3781ecff1 
00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=51be1dcb-c0d0-4348-8b70-54b3781ecff1 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:37.735 { 00:33:37.735 "uuid": "51be1dcb-c0d0-4348-8b70-54b3781ecff1", 00:33:37.735 "name": "lvs_0", 00:33:37.735 "base_bdev": "Nvme0n1", 00:33:37.735 "total_data_clusters": 930, 00:33:37.735 "free_clusters": 930, 00:33:37.735 "block_size": 512, 00:33:37.735 "cluster_size": 1073741824 00:33:37.735 } 00:33:37.735 ]' 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="51be1dcb-c0d0-4348-8b70-54b3781ecff1") .free_clusters' 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="51be1dcb-c0d0-4348-8b70-54b3781ecff1") .cluster_size' 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:33:37.735 952320 00:33:37.735 11:39:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:37.735 d638acc9-416f-44b7-ab3a-f3dac6feb149 00:33:37.735 11:39:23 
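`get_lvs_free_mb`'s result above is just `free_clusters * cluster_size` converted to MiB, with both inputs pulled out of the `bdev_lvol_get_lvstores` JSON via jq:

```shell
# free_mb = free_clusters * cluster_size / 1 MiB, with the values
# reported for lvs_0 in this run (930 clusters of 1 GiB each).
fc=930
cs=1073741824
free_mb=$((fc * cs / 1024 / 1024))
echo "$free_mb"   # 952320
```

That 952320 is then passed straight to `bdev_lvol_create` as the size of `lbd_0`, which is why the store ends up with `free_clusters: 0` afterwards.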
nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:37.992 11:39:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.250 11:39:24 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:38.250 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:38.511 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:38.511 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:38.511 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:38.511 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:38.511 11:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:38.785 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:38.785 fio-3.35 00:33:38.785 Starting 1 thread 00:33:38.785 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.313 00:33:41.313 test: (groupid=0, jobs=1): err= 0: pid=1108384: Fri Jul 12 11:39:27 2024 00:33:41.313 read: IOPS=6821, BW=26.6MiB/s (27.9MB/s)(53.5MiB/2007msec) 00:33:41.313 slat (nsec): min=1842, max=120361, avg=2020.34, stdev=1349.33 00:33:41.313 clat (usec): min=703, max=170728, avg=10233.61, stdev=11054.48 00:33:41.313 lat (usec): min=705, max=170759, avg=10235.63, stdev=11054.70 00:33:41.313 clat percentiles (msec): 00:33:41.313 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:33:41.313 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:33:41.313 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 
00:33:41.313 | 99.00th=[ 12], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:33:41.313 | 99.99th=[ 171] 00:33:41.313 bw ( KiB/s): min=19264, max=30264, per=99.89%, avg=27256.00, stdev=5334.92, samples=4 00:33:41.313 iops : min= 4816, max= 7566, avg=6814.00, stdev=1333.73, samples=4 00:33:41.313 write: IOPS=6826, BW=26.7MiB/s (28.0MB/s)(53.5MiB/2007msec); 0 zone resets 00:33:41.313 slat (nsec): min=1907, max=91464, avg=2110.92, stdev=953.19 00:33:41.313 clat (usec): min=264, max=169016, avg=8385.36, stdev=10331.75 00:33:41.313 lat (usec): min=266, max=169022, avg=8387.47, stdev=10332.00 00:33:41.313 clat percentiles (msec): 00:33:41.313 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:33:41.313 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:33:41.313 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:33:41.313 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:33:41.313 | 99.99th=[ 169] 00:33:41.313 bw ( KiB/s): min=20144, max=29760, per=99.88%, avg=27274.00, stdev=4754.02, samples=4 00:33:41.313 iops : min= 5036, max= 7440, avg=6818.50, stdev=1188.50, samples=4 00:33:41.313 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:41.313 lat (msec) : 2=0.02%, 4=0.18%, 10=86.26%, 20=13.05%, 250=0.47% 00:33:41.313 cpu : usr=75.92%, sys=23.08%, ctx=121, majf=0, minf=1530 00:33:41.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:41.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.313 issued rwts: total=13691,13701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.313 00:33:41.313 Run status group 0 (all jobs): 00:33:41.313 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=53.5MiB (56.1MB), run=2007-2007msec 00:33:41.313 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s 
(28.0MB/s-28.0MB/s), io=53.5MiB (56.1MB), run=2007-2007msec 00:33:41.313 ----------------------------------------------------- 00:33:41.313 Suppressions used: 00:33:41.313 count bytes template 00:33:41.313 1 58 /usr/src/fio/parse.c 00:33:41.313 1 8 libtcmalloc_minimal.so 00:33:41.313 ----------------------------------------------------- 00:33:41.313 00:33:41.313 11:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:41.571 11:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=af63a19c-5df7-4b29-990a-46e93ebf5dfe 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb af63a19c-5df7-4b29-990a-46e93ebf5dfe 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=af63a19c-5df7-4b29-990a-46e93ebf5dfe 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:42.503 11:39:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:42.760 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:42.760 { 00:33:42.760 "uuid": "51be1dcb-c0d0-4348-8b70-54b3781ecff1", 00:33:42.760 "name": "lvs_0", 00:33:42.760 "base_bdev": "Nvme0n1", 00:33:42.760 "total_data_clusters": 930, 00:33:42.760 "free_clusters": 0, 00:33:42.760 "block_size": 512, 00:33:42.760 "cluster_size": 1073741824 00:33:42.760 }, 00:33:42.760 { 00:33:42.760 
"uuid": "af63a19c-5df7-4b29-990a-46e93ebf5dfe", 00:33:42.760 "name": "lvs_n_0", 00:33:42.760 "base_bdev": "d638acc9-416f-44b7-ab3a-f3dac6feb149", 00:33:42.760 "total_data_clusters": 237847, 00:33:42.760 "free_clusters": 237847, 00:33:42.760 "block_size": 512, 00:33:42.760 "cluster_size": 4194304 00:33:42.760 } 00:33:42.760 ]' 00:33:42.760 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="af63a19c-5df7-4b29-990a-46e93ebf5dfe") .free_clusters' 00:33:42.760 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:33:42.760 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="af63a19c-5df7-4b29-990a-46e93ebf5dfe") .cluster_size' 00:33:43.017 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:43.017 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:33:43.017 11:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:33:43.017 951388 00:33:43.017 11:39:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:43.949 662fd84c-76c8-4a14-ab50-fdcd66a4fe3a 00:33:43.949 11:39:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:43.949 11:39:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:44.207 11:39:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:33:44.478 
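The xtrace above (common.sh@1345-1352) shows how the harness decides whether to preload an ASan runtime before the fio plugin: it runs `ldd` on the plugin, greps for `libasan`, takes the resolved path (field 3), and prepends it to `LD_PRELOAD`. A minimal sketch of that parsing, using a canned `ldd` line so it runs anywhere (the plugin path here is hypothetical, not the autotest layout):

```shell
# Canned ldd output standing in for `ldd $plugin`; field 3 of the libasan
# line is the resolved library path that must load before the plugin.
ldd_output='	linux-vdso.so.1 (0x00007ffd6bd9d000)
	libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f1a40000000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f1a3fc00000)'

plugin=/var/tmp/example/spdk_nvme   # hypothetical plugin path

# Same pipeline as the trace: grep for the sanitizer, print the path column.
asan_lib=$(printf '%s\n' "$ldd_output" | grep libasan | awk '{print $3}')

if [ -n "$asan_lib" ]; then
    LD_PRELOAD="$asan_lib $plugin"   # sanitizer first, then the ioengine
else
    LD_PRELOAD="$plugin"
fi
echo "$LD_PRELOAD"
```

Ordering matters here: the sanitizer runtime has to appear before the plugin in `LD_PRELOAD`, which is why the trace builds the value as `'libasan.so.8 <plugin>'` rather than appending.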
11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:44.478 11:39:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:44.739 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:44.739 fio-3.35 00:33:44.739 Starting 1 thread 00:33:44.739 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.264 00:33:47.264 test: (groupid=0, jobs=1): err= 0: pid=1109421: Fri Jul 12 11:39:33 2024 00:33:47.264 read: IOPS=6624, BW=25.9MiB/s (27.1MB/s)(52.0MiB/2008msec) 00:33:47.264 slat (nsec): min=1798, max=118312, avg=2150.05, stdev=1747.03 00:33:47.264 clat (usec): min=3721, max=17592, avg=10535.02, stdev=960.16 00:33:47.264 lat (usec): min=3725, max=17594, avg=10537.17, stdev=960.02 00:33:47.264 clat percentiles (usec): 00:33:47.264 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:33:47.264 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:33:47.264 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:33:47.264 | 99.00th=[12649], 99.50th=[12911], 99.90th=[15139], 99.95th=[16319], 00:33:47.264 | 99.99th=[17433] 00:33:47.264 bw ( KiB/s): min=24992, max=27296, per=99.86%, avg=26458.00, stdev=1020.69, samples=4 00:33:47.264 iops : min= 6248, max= 6824, avg=6614.50, stdev=255.17, samples=4 00:33:47.264 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(52.0MiB/2008msec); 0 zone resets 00:33:47.264 slat (nsec): min=1884, max=88667, avg=2266.69, stdev=1483.07 00:33:47.264 clat (usec): min=1695, max=15198, avg=8599.67, stdev=784.09 00:33:47.264 lat (usec): min=1701, max=15200, avg=8601.94, stdev=783.97 00:33:47.264 clat percentiles (usec): 
00:33:47.264 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:33:47.264 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:33:47.264 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:33:47.264 | 99.00th=[10290], 99.50th=[10421], 99.90th=[13698], 99.95th=[13960], 00:33:47.264 | 99.99th=[15139] 00:33:47.264 bw ( KiB/s): min=26200, max=26816, per=99.99%, avg=26516.00, stdev=264.57, samples=4 00:33:47.264 iops : min= 6550, max= 6704, avg=6629.00, stdev=66.14, samples=4 00:33:47.264 lat (msec) : 2=0.01%, 4=0.08%, 10=62.37%, 20=37.54% 00:33:47.264 cpu : usr=73.59%, sys=22.72%, ctx=611, majf=0, minf=1531 00:33:47.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:47.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:47.264 issued rwts: total=13301,13313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:47.264 00:33:47.264 Run status group 0 (all jobs): 00:33:47.264 READ: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=52.0MiB (54.5MB), run=2008-2008msec 00:33:47.264 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=52.0MiB (54.5MB), run=2008-2008msec 00:33:47.264 ----------------------------------------------------- 00:33:47.264 Suppressions used: 00:33:47.264 count bytes template 00:33:47.264 1 58 /usr/src/fio/parse.c 00:33:47.264 1 8 libtcmalloc_minimal.so 00:33:47.264 ----------------------------------------------------- 00:33:47.264 00:33:47.264 11:39:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:47.521 11:39:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:47.521 11:39:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:51.700 11:39:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:51.701 11:39:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:55.057 11:39:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:55.057 11:39:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.952 rmmod nvme_tcp 00:33:56.952 rmmod nvme_fabrics 00:33:56.952 rmmod nvme_keyring 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 
1105391 ']' 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1105391 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1105391 ']' 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1105391 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105391 00:33:56.952 11:39:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:56.952 11:39:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:56.952 11:39:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105391' 00:33:56.952 killing process with pid 1105391 00:33:56.952 11:39:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1105391 00:33:56.952 11:39:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1105391 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:58.324 11:39:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:34:00.853 11:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:00.853 00:34:00.853 real 0m43.090s 00:34:00.853 user 2m51.019s 00:34:00.853 sys 0m9.536s 00:34:00.853 11:39:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:00.853 11:39:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.853 ************************************ 00:34:00.853 END TEST nvmf_fio_host 00:34:00.853 ************************************ 00:34:00.853 11:39:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:00.853 11:39:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:00.853 11:39:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:00.853 11:39:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:00.853 11:39:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:00.853 ************************************ 00:34:00.853 START TEST nvmf_failover 00:34:00.853 ************************************ 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:00.853 * Looking for test storage... 
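Earlier in the fio_host run, `get_lvs_free_mb` turned the lvstore's `free_clusters` and `cluster_size` (from `bdev_lvol_get_lvstores`) into the `951388` MB passed to `bdev_lvol_create`. The arithmetic is just clusters times cluster size, scaled from bytes to MiB; a sketch with the values logged above:

```shell
# Values reported for lvs_n_0 by bdev_lvol_get_lvstores in the trace above.
fc=237847          # free_clusters
cs=4194304         # cluster_size in bytes (4 MiB)

# free space in MB = clusters * bytes-per-cluster / 1024 / 1024
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"    # 951388, the size handed to bdev_lvol_create
```

Since the cluster size is exactly 4 MiB, this reduces to `237847 * 4 = 951388`, which is the nested-lvol size the test then exercised over NVMe/TCP.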
00:34:00.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.853 11:39:46 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:00.853 11:39:46 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:34:00.853 11:39:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:34:06.118 11:39:51 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.118 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:06.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:06.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:06.119 Found net devices under 0000:86:00.0: cvl_0_0 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
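The device-discovery loop traced above (common.sh@382-400) resolves each e810 PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix. A self-contained sketch of that mapping, using a throwaway directory in place of real sysfs so it runs without the hardware:

```shell
# Fake sysfs tree mirroring the two e810 ports found in the trace.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob the netdev entries
    pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With two devices collected, the script then picks `cvl_0_0` as the target interface (moved into the `cvl_0_0_ns_spdk` namespace) and `cvl_0_1` as the initiator side, which is the split visible in the `ip netns` commands that follow.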
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:06.119 Found net devices under 0000:86:00.1: cvl_0_1 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:06.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:34:06.119 00:34:06.119 --- 10.0.0.2 ping statistics --- 00:34:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.119 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:06.119 00:34:06.119 --- 10.0.0.1 ping statistics --- 00:34:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.119 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1114980 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1114980 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1114980 ']' 
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:06.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:06.119 11:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:06.119 [2024-07-12 11:39:52.067052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:06.119 [2024-07-12 11:39:52.067141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:06.119 EAL: No free 2048 kB hugepages reported on node 1
00:34:06.119 [2024-07-12 11:39:52.173728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:06.119 [2024-07-12 11:39:52.384657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:06.119 [2024-07-12 11:39:52.384695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:06.119 [2024-07-12 11:39:52.384709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:06.119 [2024-07-12 11:39:52.384717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:06.119 [2024-07-12 11:39:52.384726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:06.119 [2024-07-12 11:39:52.384852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:06.119 [2024-07-12 11:39:52.384908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:06.119 [2024-07-12 11:39:52.384919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:06.685 11:39:52 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:06.942 [2024-07-12 11:39:53.048060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:06.942 11:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:34:07.199 Malloc0
00:34:07.199 11:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:07.199 11:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:07.456 11:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:07.712 [2024-07-12 11:39:53.858579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:07.712 11:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:07.712 [2024-07-12 11:39:54.031095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:07.712 11:39:54 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:07.969 [2024-07-12 11:39:54.203696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1115248
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1115248 /var/tmp/bdevperf.sock
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1115248 ']'
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:07.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:07.969 11:39:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:08.901 11:39:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:08.901 11:39:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:34:08.901 11:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:09.158 NVMe0n1
00:34:09.158 11:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:09.416
00:34:09.416 11:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1115509
00:34:09.416 11:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:09.416 11:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:34:10.789 11:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:10.789 [2024-07-12 11:39:56.911687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set
00:34:10.789 11:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:34:14.068 11:39:59 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:14.068
00:34:14.068 11:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:14.325 11:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:17.602 11:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:17.602 [2024-07-12 11:40:03.755994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:17.602 11:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:18.536 11:40:04 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:18.794 [2024-07-12 11:40:04.951988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set
00:34:18.794 11:40:04 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1115509
00:34:25.350 0
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1115248
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1115248 ']'
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1115248
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1115248
00:34:25.350 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:34:25.351 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:34:25.351 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1115248'
killing process with pid 1115248
00:34:25.351 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1115248
00:34:25.351 11:40:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1115248
00:34:26.050 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:26.050 [2024-07-12 11:39:54.292264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:26.050 [2024-07-12 11:39:54.292364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115248 ]
00:34:26.050 EAL: No free 2048 kB hugepages reported on node 1
00:34:26.050 [2024-07-12 11:39:54.397023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:26.050 [2024-07-12 11:39:54.627888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:26.050 Running I/O for 15 seconds...
00:34:26.050 [2024-07-12 11:39:56.912687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.912972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.912983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.050 [2024-07-12 11:39:56.913127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.050 [2024-07-12 11:39:56.913139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.051 [2024-07-12 11:39:56.913734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.051 [2024-07-12 11:39:56.913808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.051 [2024-07-12 11:39:56.913819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.913984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.913994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.052 [2024-07-12 11:39:56.914305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:26.052 [2024-07-12 11:39:56.914314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:26.052 [2024-07-12 11:39:56.914448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.052 [2024-07-12 11:39:56.914521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.052 [2024-07-12 11:39:56.914532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:26.053 [2024-07-12 11:39:56.914803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.914980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.914989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:26.053 [2024-07-12 11:39:56.915162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.053 [2024-07-12 11:39:56.915222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.053 [2024-07-12 11:39:56.915233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:39:56.915410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:39:56.915432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(5) to be set 00:34:26.054 [2024-07-12 11:39:56.915456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:26.054 [2024-07-12 11:39:56.915465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:26.054 [2024-07-12 11:39:56.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:34:26.054 [2024-07-12 11:39:56.915486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915781] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032dc80 was disconnected and freed. reset controller. 
00:34:26.054 [2024-07-12 11:39:56.915796] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:26.054 [2024-07-12 11:39:56.915829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:39:56.915842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:39:56.915862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:39:56.915882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:39:56.915901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:39:56.915916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:26.054 [2024-07-12 11:39:56.919064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:26.054 [2024-07-12 11:39:56.919109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:26.054 [2024-07-12 11:39:56.951411] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:26.054 [2024-07-12 11:40:00.560845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:40:00.560901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.560923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:40:00.560938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.560949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:40:00.560959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.560969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:26.054 [2024-07-12 11:40:00.560978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.560988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to 
be set 00:34:26.054 [2024-07-12 11:40:00.561908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.054 [2024-07-12 11:40:00.561937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.561958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:40:00.561970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.561982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:40:00.561992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.562003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:40:00.562014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.562026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:40:00.562036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.054 [2024-07-12 11:40:00.562048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.054 [2024-07-12 11:40:00.562058] nvme_qpair.c: 
[2024-07-12 11:40:00.562070 — 11:40:00.564597] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), identical entries elided: WRITE commands sqid:1, nsid:1, lba 105496–106016, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, and READ commands sqid:1, nsid:1, lba 105008–105424, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 — each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:00.564609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.058 [2024-07-12 11:40:00.564618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:00.564629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.058 [2024-07-12 11:40:00.564639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:00.564664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:26.058 [2024-07-12 11:40:00.564674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:26.058 [2024-07-12 11:40:00.564683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105448 len:8 PRP1 0x0 PRP2 0x0 00:34:26.058 [2024-07-12 11:40:00.564693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:00.564936] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032df00 was disconnected and freed. reset controller. 00:34:26.058 [2024-07-12 11:40:00.564950] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:26.058 [2024-07-12 11:40:00.564962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:26.058 [2024-07-12 11:40:00.568085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:26.058 [2024-07-12 11:40:00.568133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:26.058 [2024-07-12 11:40:00.646974] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:26.058 [2024-07-12 11:40:04.952303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.058 [2024-07-12 11:40:04.952354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:04.952383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.058 [2024-07-12 11:40:04.952395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:04.952407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.058 [2024-07-12 11:40:04.952418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.058 [2024-07-12 11:40:04.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67736 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 
11:40:04.952582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952955] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.952987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.952998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.953009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.059 [2024-07-12 11:40:04.953021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.059 [2024-07-12 11:40:04.953030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.060 [2024-07-12 11:40:04.953051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.060 [2024-07-12 11:40:04.953073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:26.060 [2024-07-12 11:40:04.953094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 
[2024-07-12 11:40:04.953598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.060 [2024-07-12 11:40:04.953705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.060 [2024-07-12 11:40:04.953716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.061 [2024-07-12 11:40:04.953737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.061 [2024-07-12 11:40:04.953757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.061 [2024-07-12 11:40:04.953778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.061 [2024-07-12 11:40:04.953798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:26.061 [2024-07-12 11:40:04.953819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:26.061 [2024-07-12 11:40:04.953828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:34:26.061 [2024-07-12 11:40:04.953841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:26.061 [2024-07-12 11:40:04.953850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 46 further identical NOTICE pairs elided: outstanding WRITE commands on sqid:1 (lba 68264 through 68624 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ABORTED - SQ DELETION (00/08) during queue pair teardown ...]
00:34:26.062 [2024-07-12 11:40:04.954851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... repeated NOTICE/ERROR triples elided: nvme_qpair_abort_queued_reqs aborted the queued I/O; 12 WRITE commands (lba 68632 through 68720 in steps of 8, len:8, PRP1 0x0 PRP2 0x0) and 2 READ commands (lba 67976 and 67984) each completed manually with ABORTED - SQ DELETION (00/08) ...]
00:34:26.063 [2024-07-12 11:40:04.955625] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032e680 was disconnected and freed. reset controller.
00:34:26.063 [2024-07-12 11:40:04.955641] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... three NOTICE pairs elided: outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0 through cid:2, cdw10:00000000 cdw11:00000000) each completed with ABORTED - SQ DELETION (00/08) ...]
00:34:26.063 [2024-07-12
11:40:04.955736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:26.063 [2024-07-12 11:40:04.955745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:26.063 [2024-07-12 11:40:04.955754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:26.063 [2024-07-12 11:40:04.955797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor
00:34:26.063 [2024-07-12 11:40:04.958886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:26.063 [2024-07-12 11:40:05.035446] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:26.063
00:34:26.063 Latency(us)
00:34:26.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:26.063 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:26.063 Verification LBA range: start 0x0 length 0x4000
00:34:26.063 NVMe0n1 : 15.01 9425.18 36.82 502.81 0.00 12867.98 491.52 15956.59
00:34:26.064 ===================================================================================================================
00:34:26.064 Total : 9425.18 36.82 502.81 0.00 12867.98 491.52 15956.59
00:34:26.064 Received shutdown signal, test time was about 15.000000 seconds
00:34:26.064
00:34:26.064 Latency(us)
00:34:26.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:26.064 ===================================================================================================================
00:34:26.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover --
host/failover.sh@65 -- # count=3
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1118224
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1118224 /var/tmp/bdevperf.sock
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1118224 ']'
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:26.064 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:26.630 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:26.630 11:40:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:34:26.630 11:40:12 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:26.888 [2024-07-12 11:40:13.073494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:26.888 11:40:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:27.144 [2024-07-12 11:40:13.266092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:34:27.144 11:40:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:27.401 NVMe0n1
00:34:27.401 11:40:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:27.657
00:34:27.657 11:40:13 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:27.915
00:34:27.915 11:40:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:27.915 11:40:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:34:28.172 11:40:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:28.430 11:40:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:34:31.708 11:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:31.708 11:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:34:31.708 11:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1119154
00:34:31.708 11:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:31.708 11:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1119154
00:34:32.639 0
00:34:32.639 11:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:32.639 [2024-07-12 11:40:12.132669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:34:32.639 [2024-07-12 11:40:12.132764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118224 ]
00:34:32.639 EAL: No free 2048 kB hugepages reported on node 1
00:34:32.639 [2024-07-12 11:40:12.236959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:32.639 [2024-07-12 11:40:12.472773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:32.639 [2024-07-12 11:40:14.580901] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:32.639 [2024-07-12 11:40:14.580975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:32.640 [2024-07-12 11:40:14.580994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:32.640 [2024-07-12 11:40:14.581008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:32.640 [2024-07-12 11:40:14.581019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:32.640 [2024-07-12 11:40:14.581030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:32.640 [2024-07-12 11:40:14.581040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:32.640 [2024-07-12 11:40:14.581052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:32.640 [2024-07-12 11:40:14.581062] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.640 [2024-07-12 11:40:14.581074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.640 [2024-07-12 11:40:14.581125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.640 [2024-07-12 11:40:14.581151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:32.640 [2024-07-12 11:40:14.672597] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:32.640 Running I/O for 1 seconds... 00:34:32.640 00:34:32.640 Latency(us) 00:34:32.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.640 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:32.640 Verification LBA range: start 0x0 length 0x4000 00:34:32.640 NVMe0n1 : 1.01 9738.24 38.04 0.00 0.00 13077.21 2664.18 14588.88 00:34:32.640 =================================================================================================================== 00:34:32.640 Total : 9738.24 38.04 0.00 0.00 13077.21 2664.18 14588.88 00:34:32.640 11:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:32.640 11:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:32.896 11:40:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:33.154 11:40:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:34:33.154 11:40:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:33.154 11:40:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:33.411 11:40:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1118224 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1118224 ']' 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1118224 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118224 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118224' 00:34:36.688 killing process with pid 1118224 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1118224 00:34:36.688 11:40:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1118224 00:34:38.062 11:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 
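The `killing process with pid 1118224` sequence traced above is `autotest_common.sh`'s `killprocess` helper: confirm the pid, look up its command name, special-case sudo-owned processes, then kill and wait. A minimal sketch of that pattern, reconstructed from the traced commands (the real helper also handles the sudo branch that the `'[' reactor_0 = sudo ']'` check above belongs to):

```shell
# Sketch of the killprocess pattern seen in the trace (autotest_common.sh):
# check the pid is alive, resolve its command name, then SIGTERM and reap it.
# Reconstructed from the log output; not the full upstream helper.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # is the process alive?
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}
```

The trailing `wait` is why the trace shows a paired `kill 1118224` / `wait 1118224`: it guarantees the bdevperf process has exited before the script tears down the subsystem.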
00:34:38.062 11:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:38.062 rmmod nvme_tcp 00:34:38.062 rmmod nvme_fabrics 00:34:38.062 rmmod nvme_keyring 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1114980 ']' 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1114980 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1114980 ']' 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1114980 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:38.062 11:40:24 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114980 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114980' 00:34:38.062 killing process with pid 1114980 00:34:38.062 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1114980 00:34:38.063 11:40:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1114980 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:39.440 11:40:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.976 11:40:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:41.976 00:34:41.976 real 0m41.152s 00:34:41.976 user 2m13.153s 00:34:41.976 sys 0m7.383s 00:34:41.976 11:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.976 11:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:41.976 ************************************ 00:34:41.976 END TEST nvmf_failover 00:34:41.976 ************************************ 
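Each failover assertion in the test above (`failover.sh` lines `@95`, `@99`, `@103`) is the same idiom: list controllers over the bdevperf RPC socket and `grep -q` for `NVMe0`. A hedged sketch of that check wrapped as a reusable helper; the retry loop and the function name are illustrative additions, as the script in the log greps exactly once per step:

```shell
# Polling variant of the log's `bdev_nvme_get_controllers | grep -q NVMe0`
# idiom. $1 is the controller name to look for; the remaining arguments are
# the listing command (in the log: rpc.py -s /var/tmp/bdevperf.sock
# bdev_nvme_get_controllers). The retry loop is illustrative, not from the log.
wait_for_controller() {
    local name=$1
    shift
    local i
    for i in 1 2 3 4 5; do
        if "$@" | grep -q "$name"; then
            return 0
        fi
        sleep 0.2
    done
    return 1
}
```

With a helper like this, the checks would read `wait_for_controller NVMe0 rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers`, run before and after each `bdev_nvme_detach_controller` to confirm the remaining path (4421, then 4422) keeps the controller alive.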
00:34:41.976 11:40:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:41.976 11:40:27 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:41.976 11:40:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:41.976 11:40:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.976 11:40:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:41.976 ************************************ 00:34:41.976 START TEST nvmf_host_discovery 00:34:41.976 ************************************ 00:34:41.976 11:40:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:41.976 * Looking for test storage... 00:34:41.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:41.976 11:40:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.976 11:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:41.977 11:40:28 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:34:41.977 11:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:47.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:47.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.244 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:47.245 Found net devices under 0000:86:00.0: cvl_0_0 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:47.245 Found net devices under 0000:86:00.1: cvl_0_1 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.245 11:40:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:47.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:34:47.245 00:34:47.245 --- 10.0.0.2 ping statistics --- 00:34:47.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.245 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:47.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:34:47.245 00:34:47.245 --- 10.0.0.1 ping statistics --- 00:34:47.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.245 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1123613 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1123613 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:47.245 
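The two successful pings above close out the topology that `nvmf_tcp_init` built: the target NIC `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` with `10.0.0.2/24`, while the initiator NIC `cvl_0_1` keeps `10.0.0.1/24` in the root namespace, and port 4420 is opened for NVMe/TCP. A sketch assembled from the traced commands (the function name is illustrative; interface and namespace names are taken from the log, and this must run as root on a host with those NICs):

```shell
# Hedged reconstruction of the netns topology nvmf/common.sh sets up in the
# trace: target side isolated in cvl_0_0_ns_spdk, initiator in the root
# namespace, connectivity verified with one ping in each direction.
setup_nvmf_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow NVMe/TCP traffic arriving from the initiator side (port 4420).
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity check: both directions should answer one ping.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

This is why the target app is launched below via `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: the listener on 10.0.0.2 only exists inside that namespace.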
11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1123613 ']' 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.245 11:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.245 [2024-07-12 11:40:33.243809] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:47.245 [2024-07-12 11:40:33.243908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.245 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.245 [2024-07-12 11:40:33.353660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.245 [2024-07-12 11:40:33.570480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.245 [2024-07-12 11:40:33.570529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.245 [2024-07-12 11:40:33.570542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.245 [2024-07-12 11:40:33.570553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:47.245 [2024-07-12 11:40:33.570563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.245 [2024-07-12 11:40:33.570598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 [2024-07-12 11:40:34.053582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 [2024-07-12 11:40:34.065775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:47.811 11:40:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 null0 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 null1 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1123846 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1123846 /tmp/host.sock 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1123846 ']' 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:47.811 
11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:47.811 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.811 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.811 [2024-07-12 11:40:34.165629] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:47.812 [2024-07-12 11:40:34.165710] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123846 ] 00:34:48.069 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.069 [2024-07-12 11:40:34.267784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.326 [2024-07-12 11:40:34.494917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.892 11:40:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.892 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:48.893 11:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.893 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 
-- # [[ '' == '' ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 [2024-07-12 11:40:35.305102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:34:49.152 11:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:49.718 [2024-07-12 11:40:35.994854] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:49.719 [2024-07-12 11:40:35.994885] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:49.719 [2024-07-12 11:40:35.994921] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:49.977 [2024-07-12 11:40:36.081196] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:34:49.977 [2024-07-12 11:40:36.267326] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:49.977 [2024-07-12 11:40:36.267355] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.236 
11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:50.236 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_paths nvme0 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:34:50.495 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 
00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 [2024-07-12 11:40:36.790597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:50.496 [2024-07-12 11:40:36.790761] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:50.496 [2024-07-12 11:40:36.790804] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 
00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:50.496 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' 
'$NVMF_SECOND_PORT"' ']]' 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.755 [2024-07-12 11:40:36.917669] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:50.755 11:40:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:34:51.013 [2024-07-12 11:40:37.141862] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:51.013 [2024-07-12 11:40:37.141888] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:51.013 [2024-07-12 11:40:37.141900] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 
'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:51.610 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' 
'((notification_count' == 'expected_count))' 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 11:40:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 [2024-07-12 11:40:38.046857] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:51.869 [2024-07-12 11:40:38.046893] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:51.869 [2024-07-12 11:40:38.049109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.869 [2024-07-12 11:40:38.049141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.869 [2024-07-12 11:40:38.049159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.869 [2024-07-12 11:40:38.049169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.869 [2024-07-12 11:40:38.049179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.869 [2024-07-12 11:40:38.049189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.869 [2024-07-12 11:40:38.049199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.869 [2024-07-12 11:40:38.049209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.869 [2024-07-12 11:40:38.049218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:51.869 [2024-07-12 11:40:38.059118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.869 [2024-07-12 11:40:38.069156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.869 [2024-07-12 11:40:38.069472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.869 [2024-07-12 11:40:38.069497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.869 [2024-07-12 11:40:38.069510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.869 [2024-07-12 11:40:38.069527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.869 [2024-07-12 11:40:38.069544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.869 [2024-07-12 11:40:38.069554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.869 [2024-07-12 11:40:38.069565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.869 [2024-07-12 11:40:38.069589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.869 [2024-07-12 11:40:38.079240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.869 [2024-07-12 11:40:38.079511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.869 [2024-07-12 11:40:38.079537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.869 [2024-07-12 11:40:38.079548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.869 [2024-07-12 11:40:38.079565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.869 [2024-07-12 11:40:38.079579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.869 [2024-07-12 11:40:38.079589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.869 [2024-07-12 11:40:38.079599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.869 [2024-07-12 11:40:38.079613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:51.869 [2024-07-12 11:40:38.089319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.869 [2024-07-12 11:40:38.089519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.869 [2024-07-12 11:40:38.089539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.869 [2024-07-12 11:40:38.089550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.869 [2024-07-12 11:40:38.089566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.869 [2024-07-12 11:40:38.089580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.869 [2024-07-12 11:40:38.089590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.869 [2024-07-12 11:40:38.089600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.869 [2024-07-12 11:40:38.089614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:51.869 [2024-07-12 11:40:38.099399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.869 [2024-07-12 11:40:38.099653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.869 [2024-07-12 11:40:38.099673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.869 [2024-07-12 11:40:38.099683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.869 [2024-07-12 11:40:38.099699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.869 [2024-07-12 11:40:38.099713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.869 [2024-07-12 11:40:38.099722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.869 [2024-07-12 11:40:38.099731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.869 [2024-07-12 11:40:38.099744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.869 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:51.870 [2024-07-12 11:40:38.109472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.870 [2024-07-12 11:40:38.109759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.870 [2024-07-12 11:40:38.109780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.870 
[2024-07-12 11:40:38.109791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.870 [2024-07-12 11:40:38.109807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.870 [2024-07-12 11:40:38.109821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.870 [2024-07-12 11:40:38.109831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.870 [2024-07-12 11:40:38.109842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.870 [2024-07-12 11:40:38.109857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:51.870 [2024-07-12 11:40:38.119548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.870 [2024-07-12 11:40:38.119697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.870 [2024-07-12 11:40:38.119716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.870 [2024-07-12 11:40:38.119727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.870 [2024-07-12 11:40:38.119742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.870 [2024-07-12 11:40:38.119756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.870 [2024-07-12 11:40:38.119766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:34:51.870 [2024-07-12 11:40:38.119776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.870 [2024-07-12 11:40:38.119791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:51.870 [2024-07-12 11:40:38.129621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:51.870 [2024-07-12 11:40:38.129819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.870 [2024-07-12 11:40:38.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:34:51.870 [2024-07-12 11:40:38.129849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d000 is same with the state(5) to be set 00:34:51.870 [2024-07-12 11:40:38.129864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:34:51.870 [2024-07-12 11:40:38.129877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:51.870 [2024-07-12 11:40:38.129890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:51.870 [2024-07-12 11:40:38.129899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:51.870 [2024-07-12 11:40:38.129914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:51.870 [2024-07-12 11:40:38.133292] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:51.870 [2024-07-12 11:40:38.133323] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.870 11:40:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:51.870 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:34:52.128 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:34:52.129 11:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:52.129 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.129 11:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.502 [2024-07-12 11:40:39.474984] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:53.502 [2024-07-12 11:40:39.475009] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:53.502 [2024-07-12 11:40:39.475037] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:53.502 [2024-07-12 11:40:39.602468] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:53.502 [2024-07-12 11:40:39.710226] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:53.502 [2024-07-12 11:40:39.710266] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.502 request: 00:34:53.502 { 00:34:53.502 "name": "nvme", 00:34:53.502 "trtype": "tcp", 00:34:53.502 "traddr": "10.0.0.2", 00:34:53.502 "adrfam": "ipv4", 00:34:53.502 "trsvcid": "8009", 00:34:53.502 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:53.502 "wait_for_attach": true, 00:34:53.502 "method": "bdev_nvme_start_discovery", 00:34:53.502 "req_id": 1 00:34:53.502 } 00:34:53.502 Got JSON-RPC error 
response 00:34:53.502 response: 00:34:53.502 { 00:34:53.502 "code": -17, 00:34:53.502 "message": "File exists" 00:34:53.502 } 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.502 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.503 request: 00:34:53.503 { 00:34:53.503 "name": "nvme_second", 00:34:53.503 "trtype": "tcp", 
00:34:53.503 "traddr": "10.0.0.2", 00:34:53.503 "adrfam": "ipv4", 00:34:53.503 "trsvcid": "8009", 00:34:53.503 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:53.503 "wait_for_attach": true, 00:34:53.503 "method": "bdev_nvme_start_discovery", 00:34:53.503 "req_id": 1 00:34:53.503 } 00:34:53.503 Got JSON-RPC error response 00:34:53.503 response: 00:34:53.503 { 00:34:53.503 "code": -17, 00:34:53.503 "message": "File exists" 00:34:53.503 } 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.503 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.760 11:40:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:54.693 [2024-07-12 11:40:40.933819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:54.693 [2024-07-12 11:40:40.933868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=8010 00:34:54.693 [2024-07-12 11:40:40.933926] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:54.693 [2024-07-12 11:40:40.933937] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:54.693 [2024-07-12 11:40:40.933949] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:55.626 [2024-07-12 11:40:41.936190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.626 [2024-07-12 11:40:41.936224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e400 with addr=10.0.0.2, port=8010 00:34:55.626 [2024-07-12 11:40:41.936274] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:55.626 [2024-07-12 11:40:41.936286] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:55.626 [2024-07-12 11:40:41.936299] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:57.001 [2024-07-12 11:40:42.938391] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:57.001 request: 00:34:57.001 { 00:34:57.001 "name": "nvme_second", 00:34:57.001 "trtype": "tcp", 00:34:57.001 "traddr": "10.0.0.2", 00:34:57.001 "adrfam": "ipv4", 00:34:57.001 "trsvcid": "8010", 00:34:57.001 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:57.001 "wait_for_attach": false, 
00:34:57.001 "attach_timeout_ms": 3000, 00:34:57.001 "method": "bdev_nvme_start_discovery", 00:34:57.001 "req_id": 1 00:34:57.001 } 00:34:57.001 Got JSON-RPC error response 00:34:57.001 response: 00:34:57.001 { 00:34:57.001 "code": -110, 00:34:57.001 "message": "Connection timed out" 00:34:57.001 } 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1123846 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:57.001 11:40:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:57.001 rmmod nvme_tcp 00:34:57.001 rmmod nvme_fabrics 00:34:57.001 rmmod nvme_keyring 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1123613 ']' 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1123613 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1123613 ']' 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1123613 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123613 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 1123613' 00:34:57.001 killing process with pid 1123613 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1123613 00:34:57.001 11:40:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1123613 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:58.379 11:40:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.281 11:40:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:00.281 00:35:00.281 real 0m18.568s 00:35:00.281 user 0m23.944s 00:35:00.281 sys 0m5.410s 00:35:00.281 11:40:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:00.281 11:40:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:00.281 ************************************ 00:35:00.281 END TEST nvmf_host_discovery 00:35:00.281 ************************************ 00:35:00.281 11:40:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:00.281 11:40:46 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:00.281 11:40:46 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:00.281 11:40:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:00.281 11:40:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.281 ************************************ 00:35:00.281 START TEST nvmf_host_multipath_status 00:35:00.281 ************************************ 00:35:00.281 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:00.281 * Looking for test storage... 00:35:00.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.282 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.539 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.540 11:40:46 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:35:00.540 11:40:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:35:05.811 11:40:51 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.811 11:40:51 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:05.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:05.811 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.811 11:40:51 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.811 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:05.811 Found net devices under 0000:86:00.0: cvl_0_0 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:05.812 Found net devices under 0000:86:00.1: cvl_0_1 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:05.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:05.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:35:05.812 00:35:05.812 --- 10.0.0.2 ping statistics --- 00:35:05.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.812 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:05.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:05.812 00:35:05.812 --- 10.0.0.1 ping statistics --- 00:35:05.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.812 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1128962 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1128962 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1128962 ']' 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:05.812 11:40:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:05.812 [2024-07-12 11:40:52.031907] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:05.812 [2024-07-12 11:40:52.031998] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.812 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.812 [2024-07-12 11:40:52.141858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:06.071 [2024-07-12 11:40:52.355625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.071 [2024-07-12 11:40:52.355671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.071 [2024-07-12 11:40:52.355685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.071 [2024-07-12 11:40:52.355693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.071 [2024-07-12 11:40:52.355703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:06.071 [2024-07-12 11:40:52.355809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.071 [2024-07-12 11:40:52.355822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1128962 00:35:06.639 11:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:06.897 [2024-07-12 11:40:53.013180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.897 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:07.156 Malloc0 00:35:07.156 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:07.156 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:07.414 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.674 [2024-07-12 11:40:53.792203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:07.674 [2024-07-12 11:40:53.964726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1129401 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1129401 /var/tmp/bdevperf.sock 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1129401 ']' 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:35:07.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:07.674 11:40:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:08.607 11:40:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:08.607 11:40:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:35:08.607 11:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:08.864 11:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:35:09.430 Nvme0n1 00:35:09.430 11:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:09.688 Nvme0n1 00:35:09.688 11:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:09.688 11:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:11.590 11:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:11.590 11:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:11.850 11:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:12.108 11:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:13.047 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:13.047 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.047 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.047 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.307 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.566 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.566 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.566 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.566 11:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:13.825 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.825 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:13.825 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.825 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:14.084 11:41:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:14.084 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:14.343 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:14.602 11:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:15.541 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:15.541 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:15.541 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:15.541 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.800 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:35:15.800 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:15.800 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.800 11:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:16.059 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.318 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.318 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:35:16.318 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.318 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:16.613 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:16.614 11:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:16.882 11:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:17.140 11:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:18.077 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:18.077 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:18.077 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.077 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.336 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:18.595 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.595 11:41:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:18.595 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.595 11:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:18.854 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:18.854 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:18.854 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.854 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.119 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:35:19.120 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:19.385 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:19.643 11:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:20.578 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:20.578 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:20.578 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.578 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:20.837 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.837 11:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.837 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:21.096 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.096 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:21.096 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.096 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:21.354 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.354 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:21.354 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.355 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false
00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:35:21.614 11:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:35:21.872 11:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:22.131 11:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:35:23.065 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:35:23.065 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:23.065 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:23.065 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:23.324 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:23.583 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:23.583 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:23.583 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:23.583 11:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:23.841 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:23.842 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:23.842 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:23.842 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:35:24.101 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:35:24.360 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:24.618 11:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:35:25.552 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:35:25.552 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:25.552 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:25.552 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:25.811 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:25.811 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:25.811 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:25.811 11:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:25.811 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:25.811 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:25.811 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:25.811 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:26.070 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:26.070 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:26.070 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:26.070 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:26.329 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:26.330 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:26.588 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:26.588 11:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:35:26.845 11:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:35:26.845 11:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:35:27.101 11:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:27.101 11:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:28.474 11:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:28.732 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:28.732 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:28.732 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:28.732 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:28.989 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:28.990 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:28.990 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:28.990 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:35:29.247 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:29.505 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:29.762 11:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:35:30.696 11:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:35:30.696 11:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:30.696 11:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:30.696 11:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:30.955 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:30.955 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:30.955 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:30.955 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:31.213 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:31.472 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.472 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:31.472 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:31.472 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:31.730 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.730 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:31.730 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:31.730 11:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:31.730 11:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:31.730 11:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:35:31.730 11:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:31.988 11:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:35:32.244 11:41:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:35:33.178 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:35:33.178 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:33.178 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:33.178 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:33.437 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:33.437 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:33.437 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:33.437 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:33.695 11:41:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:33.953 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:33.953 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:33.953 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:33.953 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:35:34.211 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:34.530 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:34.788 11:41:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:35:35.722 11:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:35:35.722 11:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:35.722 11:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:35.722 11:41:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:36.015 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:36.417 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:36.674 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:36.674 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:36.674 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:36.674 11:41:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1129401
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1129401 ']'
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1129401
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1129401
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1129401'
killing process with pid 1129401
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1129401
00:35:36.931 11:41:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1129401
00:35:37.495 Connection closed with partial response:
00:35:37.495
00:35:37.495
00:35:38.070 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1129401
00:35:38.070 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:38.070 [2024-07-12 11:40:54.048023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:35:38.070 [2024-07-12 11:40:54.048120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129401 ]
00:35:38.070 EAL: No free 2048 kB hugepages reported on node 1
00:35:38.070 [2024-07-12 11:40:54.146415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:38.070 [2024-07-12 11:40:54.371611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:35:38.070 Running I/O for 90 seconds...
00:35:38.070 [2024-07-12 11:41:08.109708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.070 [2024-07-12 11:41:08.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:38.070 [2024-07-12 11:41:08.109790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.070 [2024-07-12 11:41:08.109803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:38.070 [2024-07-12 11:41:08.109821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.070 [2024-07-12 11:41:08.109833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:38.070 [2024-07-12 11:41:08.109852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.070 [2024-07-12 11:41:08.109862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:35:38.070 [2024-07-12 11:41:08.109879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.070 [2024-07-12 11:41:08.109891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.109908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.071 [2024-07-12 11:41:08.109918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.109937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.071 [2024-07-12 11:41:08.109949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.109967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.071 [2024-07-12 11:41:08.109977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.109995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.110307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.111124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:38.071 [2024-07-12 11:41:08.111150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.111173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.071 [2024-07-12 11:41:08.111184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:38.071 [2024-07-12 11:41:08.111203]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.071 [2024-07-12 11:41:08.111755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.071 [2024-07-12 11:41:08.111771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.111986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.111996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.112976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.112993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.072 [2024-07-12 11:41:08.113843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.072 [2024-07-12 11:41:08.113865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.113875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.113892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.113903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.113920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.113930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.113948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.113959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.113975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.113985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.073 [2024-07-12 11:41:08.114952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.073 [2024-07-12 11:41:08.114961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.114978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.114989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.115106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.115985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.115995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.074 [2024-07-12 11:41:08.116356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.074 [2024-07-12 11:41:08.116828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.074 [2024-07-12 11:41:08.116841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.116869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.116895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.116922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.116948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.116976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.116995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.117716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.117726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.118395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.118416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.118437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.118447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.075 [2024-07-12 11:41:08.118463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.075 [2024-07-12 11:41:08.118474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.118985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.118996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.119023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.119051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.119077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.119104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.119130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.119150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.076 [2024-07-12 11:41:08.128928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.076 [2024-07-12 11:41:08.128939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.128956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.128968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.128985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.128996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.129346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.129456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.129466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.077 [2024-07-12 11:41:08.130633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.077 [2024-07-12 11:41:08.130850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.077 [2024-07-12 11:41:08.130861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.130879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.130908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.130918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.130935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.130947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.130965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.130977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.130996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.131983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.131993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.132011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.132021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.132664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.132685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.078 [2024-07-12 11:41:08.132705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.078 [2024-07-12 11:41:08.132716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.132988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.132998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.079 [2024-07-12 11:41:08.133732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.079 [2024-07-12 11:41:08.133749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.133981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.133998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.080 [2024-07-12 11:41:08.134194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.080 [2024-07-12 11:41:08.134205] nvme_qpair.c: 
[2024-07-12 11:41:08.134223 - 11:41:08.138838] nvme_qpair.c: repeated *NOTICE* records: 243:nvme_io_qpair_print_command reports WRITE and READ commands (sqid:1 nsid:1, lba range 35184-36200, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd advancing 0x003d through 0x007f, wrapping to 0x0030).
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.138867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.138895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.138923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.138951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.138979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.138992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.139336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.139393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.139405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.140098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.140131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.140160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.140190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.083 [2024-07-12 11:41:08.140219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.083 [2024-07-12 11:41:08.140403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.083 [2024-07-12 11:41:08.140421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.084 [2024-07-12 11:41:08.140599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.140984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.140995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.084 [2024-07-12 11:41:08.141594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.084 [2024-07-12 11:41:08.141612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.141901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.142973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.142990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.085 [2024-07-12 11:41:08.143464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.085 [2024-07-12 11:41:08.143482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.143972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.143990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.144157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.144168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.086 [2024-07-12 11:41:08.148331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.148350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.148361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.086 [2024-07-12 11:41:08.149199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.086 [2024-07-12 11:41:08.149228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.086 [2024-07-12 11:41:08.149257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.086 [2024-07-12 11:41:08.149285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.086 [2024-07-12 11:41:08.149316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.086 [2024-07-12 11:41:08.149339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.087 [2024-07-12 11:41:08.149589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.149980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.149991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.087 [2024-07-12 11:41:08.150392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.087 [2024-07-12 11:41:08.150412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.150876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.150887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.151973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.151995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.088 [2024-07-12 11:41:08.152280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.088 [2024-07-12 11:41:08.152298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.152976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.152995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.153396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.153415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.089 [2024-07-12 11:41:08.153426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.154116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.154139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.154165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.154177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.154195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.154206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.154223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.154235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.089 [2024-07-12 11:41:08.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.089 [2024-07-12 11:41:08.154264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.090 [2024-07-12 11:41:08.154716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.154974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.154985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.090 [2024-07-12 11:41:08.155513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.090 [2024-07-12 11:41:08.155524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.155974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.155993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.156971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.156989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.091 [2024-07-12 11:41:08.157242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.091 [2024-07-12 11:41:08.157261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.157976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.157994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.092 [2024-07-12 11:41:08.158539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.092 [2024-07-12 11:41:08.158551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.158568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.093 [2024-07-12 11:41:08.159874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.159984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.159995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.093 [2024-07-12 11:41:08.160457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.093 [2024-07-12 11:41:08.160475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.160977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.160990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.161972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.161983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.094 [2024-07-12 11:41:08.162364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.094 [2024-07-12 11:41:08.162381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.162988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.162999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.095 [2024-07-12 11:41:08.163445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.095 [2024-07-12 11:41:08.163465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.163929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.163970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.163983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.096 [2024-07-12 11:41:08.164664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.164970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.164992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.165004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.165026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.165037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.165059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.165071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.096 [2024-07-12 11:41:08.165092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.096 [2024-07-12 11:41:08.165103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.165970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.165991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.166003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.166025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.166037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.166066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.166079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:08.166223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:08.166237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.097 [2024-07-12 11:41:20.935385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.097 [2024-07-12 11:41:20.935402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.935986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.935996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.936012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.936022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.936039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.936049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.936066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.098 [2024-07-12 11:41:20.936076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.098 [2024-07-12 11:41:20.938751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.098 [2024-07-12 11:41:20.938921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.098 [2024-07-12 11:41:20.938938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.099 [2024-07-12 11:41:20.938948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.099 [2024-07-12 11:41:20.938965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.099 [2024-07-12 11:41:20.938975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.099 Received shutdown signal, test time was about 27.119411 seconds 00:35:38.099 00:35:38.099 Latency(us) 00:35:38.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.099 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:38.099 Verification LBA range: start 0x0 length 0x4000 00:35:38.099 Nvme0n1 : 27.12 9079.38 35.47 0.00 0.00 14072.51 509.33 3078254.41 
00:35:38.099 =================================================================================================================== 00:35:38.099 Total : 9079.38 35.47 0.00 0.00 14072.51 509.33 3078254.41 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:38.099 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:38.099 rmmod nvme_tcp 00:35:38.357 rmmod nvme_fabrics 00:35:38.358 rmmod nvme_keyring 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1128962 ']' 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 
1128962 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1128962 ']' 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1128962 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128962 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128962' 00:35:38.358 killing process with pid 1128962 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1128962 00:35:38.358 11:41:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1128962 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:35:39.734 11:41:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.265 11:41:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:42.265 00:35:42.265 real 0m41.522s 00:35:42.265 user 1m51.536s 00:35:42.265 sys 0m10.407s 00:35:42.265 11:41:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:42.265 11:41:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:42.265 ************************************ 00:35:42.265 END TEST nvmf_host_multipath_status 00:35:42.265 ************************************ 00:35:42.265 11:41:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:42.265 11:41:28 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:42.265 11:41:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:42.265 11:41:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:42.265 11:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.265 ************************************ 00:35:42.265 START TEST nvmf_discovery_remove_ifc 00:35:42.265 ************************************ 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:42.265 * Looking for test storage... 
00:35:42.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.265 11:41:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.265 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.266 11:41:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:42.266 11:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:47.536 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:47.536 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:47.536 Found net devices under 0000:86:00.0: cvl_0_0 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:47.536 Found net devices under 0000:86:00.1: cvl_0_1 
00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.536 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.537 
11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:47.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:35:47.537 00:35:47.537 --- 10.0.0.2 ping statistics --- 00:35:47.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.537 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:47.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:35:47.537 00:35:47.537 --- 10.0.0.1 ping statistics --- 00:35:47.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.537 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1137937 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1137937 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1137937 ']' 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.537 11:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.537 [2024-07-12 11:41:33.463661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:47.537 [2024-07-12 11:41:33.463748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.537 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.537 [2024-07-12 11:41:33.572548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.537 [2024-07-12 11:41:33.785164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.537 [2024-07-12 11:41:33.785210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:47.537 [2024-07-12 11:41:33.785222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.537 [2024-07-12 11:41:33.785234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.537 [2024-07-12 11:41:33.785247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.537 [2024-07-12 11:41:33.785277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.106 [2024-07-12 11:41:34.283712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.106 [2024-07-12 11:41:34.291834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:48.106 null0 00:35:48.106 [2024-07-12 11:41:34.323848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1138179 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1138179 /tmp/host.sock 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1138179 ']' 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:48.106 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:48.106 11:41:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.106 [2024-07-12 11:41:34.419448] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:35:48.106 [2024-07-12 11:41:34.419530] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138179 ] 00:35:48.365 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.365 [2024-07-12 11:41:34.520876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.625 [2024-07-12 11:41:34.731981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.884 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.451 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.451 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:49.451 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.451 11:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.385 [2024-07-12 11:41:36.573819] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:50.385 [2024-07-12 11:41:36.573849] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:50.385 [2024-07-12 11:41:36.573882] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:50.385 [2024-07-12 11:41:36.700288] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:50.645 [2024-07-12 11:41:36.845837] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:50.645 [2024-07-12 11:41:36.845889] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:50.645 [2024-07-12 11:41:36.845949] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:50.645 [2024-07-12 11:41:36.845968] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:50.645 [2024-07-12 11:41:36.845995] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:50.645 11:41:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.645 [2024-07-12 11:41:36.853178] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500032d500 was disconnected and freed. delete nvme_qpair. 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:50.645 11:41:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.904 11:41:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:50.904 11:41:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:51.841 11:41:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:52.776 11:41:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:52.776 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.034 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:53.034 11:41:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:53.970 11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:53.970 
11:41:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:54.905 11:41:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:56.282 
11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.282 [2024-07-12 11:41:42.287199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:56.282 [2024-07-12 11:41:42.287259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.282 [2024-07-12 11:41:42.287276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.282 [2024-07-12 11:41:42.287292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.282 [2024-07-12 11:41:42.287303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.282 [2024-07-12 11:41:42.287314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.282 [2024-07-12 11:41:42.287326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.282 [2024-07-12 11:41:42.287337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.282 [2024-07-12 11:41:42.287346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.282 [2024-07-12 11:41:42.287357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.282 [2024-07-12 11:41:42.287367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.282 [2024-07-12 11:41:42.287384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:35:56.282 [2024-07-12 11:41:42.297216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:35:56.282 [2024-07-12 11:41:42.307258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:56.282 11:41:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:57.216 [2024-07-12 11:41:43.338472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:57.216 [2024-07-12 11:41:43.338551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:35:57.216 [2024-07-12 11:41:43.338576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:35:57.216 
[2024-07-12 11:41:43.338626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:35:57.216 [2024-07-12 11:41:43.339272] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:57.216 [2024-07-12 11:41:43.339306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:57.216 [2024-07-12 11:41:43.339326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:57.216 [2024-07-12 11:41:43.339343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:57.216 [2024-07-12 11:41:43.339391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:57.216 [2024-07-12 11:41:43.339410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:57.216 11:41:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:58.154 [2024-07-12 11:41:44.341918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:35:58.154 [2024-07-12 11:41:44.341949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:58.154 [2024-07-12 11:41:44.341960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:58.154 [2024-07-12 11:41:44.341970] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:58.154 [2024-07-12 11:41:44.341988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:58.154 [2024-07-12 11:41:44.342016] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:58.154 [2024-07-12 11:41:44.342055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.154 [2024-07-12 11:41:44.342070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.154 [2024-07-12 11:41:44.342085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.154 [2024-07-12 11:41:44.342095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.154 [2024-07-12 11:41:44.342105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.154 [2024-07-12 11:41:44.342115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.154 [2024-07-12 11:41:44.342125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.154 
[2024-07-12 11:41:44.342138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.154 [2024-07-12 11:41:44.342149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:58.154 [2024-07-12 11:41:44.342158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:58.154 [2024-07-12 11:41:44.342167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:35:58.154 [2024-07-12 11:41:44.342203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d000 (9): Bad file descriptor 00:35:58.154 [2024-07-12 11:41:44.343208] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:58.154 [2024-07-12 11:41:44.343231] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:58.154 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.412 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:58.412 11:41:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:59.396 11:41:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:00.332 [2024-07-12 11:41:46.400042] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:00.332 [2024-07-12 11:41:46.400066] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:00.332 [2024-07-12 11:41:46.400088] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:00.332 [2024-07-12 11:41:46.529525] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.332 
11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:00.332 11:41:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:00.592 [2024-07-12 11:41:46.753448] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:00.592 [2024-07-12 11:41:46.753497] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:00.592 [2024-07-12 11:41:46.753556] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:00.592 [2024-07-12 11:41:46.753576] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:00.592 [2024-07-12 11:41:46.753587] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:00.592 [2024-07-12 11:41:46.799552] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500032dc80 was disconnected and freed. delete nvme_qpair. 
00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1138179 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1138179 ']' 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1138179 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1138179 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:01.526 11:41:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1138179' 00:36:01.526 killing process with pid 1138179 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1138179 00:36:01.526 11:41:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1138179 00:36:02.459 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:02.460 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.460 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.718 rmmod nvme_tcp 00:36:02.718 rmmod nvme_fabrics 00:36:02.718 rmmod nvme_keyring 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1137937 ']' 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1137937 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1137937 ']' 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 1137937 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137937 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137937' 00:36:02.718 killing process with pid 1137937 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1137937 00:36:02.718 11:41:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1137937 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:04.096 11:41:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.998 11:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:36:05.998 00:36:05.998 real 0m24.130s 00:36:05.998 user 0m32.033s 00:36:05.998 sys 0m5.238s 00:36:05.998 11:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:05.998 11:41:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:05.998 ************************************ 00:36:05.998 END TEST nvmf_discovery_remove_ifc 00:36:05.998 ************************************ 00:36:05.998 11:41:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:05.998 11:41:52 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:05.998 11:41:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:05.998 11:41:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:05.998 11:41:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.998 ************************************ 00:36:05.998 START TEST nvmf_identify_kernel_target 00:36:05.998 ************************************ 00:36:05.998 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:06.257 * Looking for test storage... 
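The killprocess sequence traced above (a `kill -0` liveness probe, a `ps` command-name guard, then `kill` and `wait`) can be sketched as a standalone function. This is a minimal reconstruction of the pattern visible in the trace, not the actual autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace above.
killprocess() {
  local pid=$1
  # kill -0 delivers no signal; it only tests whether the pid is alive
  kill -0 "$pid" 2>/dev/null || return 0
  # guard seen in the trace: never SIGTERM a process whose comm is "sudo"
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = "sudo" ] && return 1
  echo "killing process with pid $pid"
  kill "$pid"
  # reap the child so the test does not leave a zombie behind
  wait "$pid" 2>/dev/null || true
}
```

The `wait` only works when the target is a child of the calling shell, which holds for the daemonized reactors these tests spawn from the same script.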
00:36:06.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.257 11:41:52 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:06.257 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.258 11:41:52 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:36:06.258 11:41:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:36:11.530 11:41:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:11.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.530 11:41:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:11.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:11.530 Found net devices under 0000:86:00.0: cvl_0_0 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:11.530 Found net devices under 0000:86:00.1: cvl_0_1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
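The "Found net devices under 0000:86:00.x" lines above come from globbing the `net/` subdirectory of each PCI function in sysfs and stripping the leading path, as in `"${pci_net_devs[@]##*/}"`. A minimal sketch of that lookup, with the sysfs root made a parameter (an assumption, so it can be exercised against a fake tree):

```shell
#!/usr/bin/env bash
# Sketch: list kernel net-device names bound to one PCI function.
list_pci_net_devs() {
  local pci=$1 root=${2:-/sys/bus/pci/devices}
  # glob the net/ subdir of the PCI function, e.g. .../0000:86:00.0/net/cvl_0_0
  local devs=("$root/$pci/net/"*)
  # an unmatched glob stays literal; treat that as "no net device"
  [ -e "${devs[0]}" ] || return 1
  # keep only the basename, mirroring "${pci_net_devs[@]##*/}" in the trace
  devs=("${devs[@]##*/}")
  printf '%s\n' "${devs[@]}"
}
```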
00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:11.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:36:11.530 00:36:11.530 --- 10.0.0.2 ping statistics --- 00:36:11.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.530 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:11.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:36:11.530 00:36:11.530 --- 10.0.0.1 ping statistics --- 00:36:11.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.530 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.530 11:41:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.530 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:11.531 11:41:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:14.076 Waiting for block devices as requested 00:36:14.076 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:14.076 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:14.076 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:14.076 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:14.076 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:14.076 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:14.076 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:14.334 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:14.334 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:14.334 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:14.334 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:14.593 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:14.593 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:14.593 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:14.593 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:14.850 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:14.850 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:14.850 No valid GPT data, bailing 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:36:14.850 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:15.108 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:15.108 00:36:15.108 Discovery Log Number of Records 2, Generation counter 2 00:36:15.108 =====Discovery Log Entry 0====== 00:36:15.108 trtype: tcp 00:36:15.108 adrfam: ipv4 00:36:15.108 subtype: current discovery subsystem 00:36:15.108 treq: not specified, sq flow control disable supported 00:36:15.108 portid: 1 00:36:15.108 trsvcid: 4420 00:36:15.108 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:15.108 traddr: 10.0.0.1 00:36:15.108 eflags: none 00:36:15.108 sectype: none 00:36:15.108 =====Discovery Log Entry 1====== 00:36:15.108 trtype: tcp 00:36:15.108 adrfam: ipv4 00:36:15.108 subtype: nvme subsystem 00:36:15.108 treq: not specified, sq flow control disable supported 00:36:15.108 portid: 1 00:36:15.108 trsvcid: 4420 00:36:15.108 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:15.108 traddr: 10.0.0.1 00:36:15.108 eflags: none 00:36:15.108 sectype: none 00:36:15.108 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:15.108 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:15.108 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.108 ===================================================== 00:36:15.108 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:15.108 ===================================================== 00:36:15.108 Controller Capabilities/Features 00:36:15.108 ================================ 00:36:15.108 Vendor ID: 0000 00:36:15.108 Subsystem Vendor ID: 0000 00:36:15.108 Serial Number: e4b57c503a8a55189179 00:36:15.108 Model Number: Linux 00:36:15.108 Firmware Version: 6.7.0-68 00:36:15.108 Recommended Arb Burst: 0 00:36:15.108 IEEE OUI Identifier: 00 00 00 00:36:15.108 Multi-path I/O 00:36:15.108 May have multiple subsystem ports: No 00:36:15.108 May have multiple controllers: No 00:36:15.108 Associated with SR-IOV VF: No 00:36:15.108 Max Data Transfer Size: Unlimited 00:36:15.108 Max Number of Namespaces: 0 00:36:15.108 Max Number of I/O Queues: 1024 00:36:15.108 NVMe Specification Version (VS): 1.3 00:36:15.108 NVMe Specification Version (Identify): 1.3 00:36:15.108 Maximum Queue Entries: 1024 00:36:15.108 Contiguous Queues Required: No 00:36:15.108 Arbitration Mechanisms Supported 00:36:15.108 Weighted Round Robin: Not Supported 00:36:15.108 Vendor Specific: Not Supported 00:36:15.108 Reset Timeout: 7500 ms 00:36:15.108 Doorbell Stride: 4 bytes 00:36:15.108 NVM Subsystem Reset: Not Supported 00:36:15.108 Command Sets Supported 00:36:15.108 NVM Command Set: Supported 00:36:15.108 Boot Partition: Not Supported 00:36:15.108 Memory Page Size Minimum: 4096 bytes 00:36:15.108 Memory Page Size Maximum: 4096 bytes 00:36:15.108 Persistent Memory Region: Not Supported 00:36:15.108 Optional Asynchronous Events Supported 00:36:15.108 Namespace Attribute Notices: Not Supported 00:36:15.108 Firmware Activation Notices: Not Supported 00:36:15.108 ANA Change Notices: Not Supported 00:36:15.108 PLE Aggregate Log Change Notices: Not Supported 
00:36:15.108 LBA Status Info Alert Notices: Not Supported 00:36:15.108 EGE Aggregate Log Change Notices: Not Supported 00:36:15.108 Normal NVM Subsystem Shutdown event: Not Supported 00:36:15.108 Zone Descriptor Change Notices: Not Supported 00:36:15.108 Discovery Log Change Notices: Supported 00:36:15.108 Controller Attributes 00:36:15.108 128-bit Host Identifier: Not Supported 00:36:15.108 Non-Operational Permissive Mode: Not Supported 00:36:15.108 NVM Sets: Not Supported 00:36:15.108 Read Recovery Levels: Not Supported 00:36:15.108 Endurance Groups: Not Supported 00:36:15.108 Predictable Latency Mode: Not Supported 00:36:15.108 Traffic Based Keep ALive: Not Supported 00:36:15.108 Namespace Granularity: Not Supported 00:36:15.108 SQ Associations: Not Supported 00:36:15.108 UUID List: Not Supported 00:36:15.108 Multi-Domain Subsystem: Not Supported 00:36:15.108 Fixed Capacity Management: Not Supported 00:36:15.108 Variable Capacity Management: Not Supported 00:36:15.108 Delete Endurance Group: Not Supported 00:36:15.108 Delete NVM Set: Not Supported 00:36:15.108 Extended LBA Formats Supported: Not Supported 00:36:15.108 Flexible Data Placement Supported: Not Supported 00:36:15.108 00:36:15.108 Controller Memory Buffer Support 00:36:15.108 ================================ 00:36:15.108 Supported: No 00:36:15.108 00:36:15.108 Persistent Memory Region Support 00:36:15.108 ================================ 00:36:15.108 Supported: No 00:36:15.108 00:36:15.108 Admin Command Set Attributes 00:36:15.108 ============================ 00:36:15.108 Security Send/Receive: Not Supported 00:36:15.108 Format NVM: Not Supported 00:36:15.108 Firmware Activate/Download: Not Supported 00:36:15.108 Namespace Management: Not Supported 00:36:15.108 Device Self-Test: Not Supported 00:36:15.108 Directives: Not Supported 00:36:15.108 NVMe-MI: Not Supported 00:36:15.108 Virtualization Management: Not Supported 00:36:15.108 Doorbell Buffer Config: Not Supported 00:36:15.108 Get LBA Status 
Capability: Not Supported 00:36:15.109 Command & Feature Lockdown Capability: Not Supported 00:36:15.109 Abort Command Limit: 1 00:36:15.109 Async Event Request Limit: 1 00:36:15.109 Number of Firmware Slots: N/A 00:36:15.109 Firmware Slot 1 Read-Only: N/A 00:36:15.109 Firmware Activation Without Reset: N/A 00:36:15.109 Multiple Update Detection Support: N/A 00:36:15.109 Firmware Update Granularity: No Information Provided 00:36:15.109 Per-Namespace SMART Log: No 00:36:15.109 Asymmetric Namespace Access Log Page: Not Supported 00:36:15.109 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:15.109 Command Effects Log Page: Not Supported 00:36:15.109 Get Log Page Extended Data: Supported 00:36:15.109 Telemetry Log Pages: Not Supported 00:36:15.109 Persistent Event Log Pages: Not Supported 00:36:15.109 Supported Log Pages Log Page: May Support 00:36:15.109 Commands Supported & Effects Log Page: Not Supported 00:36:15.109 Feature Identifiers & Effects Log Page:May Support 00:36:15.109 NVMe-MI Commands & Effects Log Page: May Support 00:36:15.109 Data Area 4 for Telemetry Log: Not Supported 00:36:15.109 Error Log Page Entries Supported: 1 00:36:15.109 Keep Alive: Not Supported 00:36:15.109 00:36:15.109 NVM Command Set Attributes 00:36:15.109 ========================== 00:36:15.109 Submission Queue Entry Size 00:36:15.109 Max: 1 00:36:15.109 Min: 1 00:36:15.109 Completion Queue Entry Size 00:36:15.109 Max: 1 00:36:15.109 Min: 1 00:36:15.109 Number of Namespaces: 0 00:36:15.109 Compare Command: Not Supported 00:36:15.109 Write Uncorrectable Command: Not Supported 00:36:15.109 Dataset Management Command: Not Supported 00:36:15.109 Write Zeroes Command: Not Supported 00:36:15.109 Set Features Save Field: Not Supported 00:36:15.109 Reservations: Not Supported 00:36:15.109 Timestamp: Not Supported 00:36:15.109 Copy: Not Supported 00:36:15.109 Volatile Write Cache: Not Present 00:36:15.109 Atomic Write Unit (Normal): 1 00:36:15.109 Atomic Write Unit (PFail): 1 
00:36:15.109 Atomic Compare & Write Unit: 1 00:36:15.109 Fused Compare & Write: Not Supported 00:36:15.109 Scatter-Gather List 00:36:15.109 SGL Command Set: Supported 00:36:15.109 SGL Keyed: Not Supported 00:36:15.109 SGL Bit Bucket Descriptor: Not Supported 00:36:15.109 SGL Metadata Pointer: Not Supported 00:36:15.109 Oversized SGL: Not Supported 00:36:15.109 SGL Metadata Address: Not Supported 00:36:15.109 SGL Offset: Supported 00:36:15.109 Transport SGL Data Block: Not Supported 00:36:15.109 Replay Protected Memory Block: Not Supported 00:36:15.109 00:36:15.109 Firmware Slot Information 00:36:15.109 ========================= 00:36:15.109 Active slot: 0 00:36:15.109 00:36:15.109 00:36:15.109 Error Log 00:36:15.109 ========= 00:36:15.109 00:36:15.109 Active Namespaces 00:36:15.109 ================= 00:36:15.109 Discovery Log Page 00:36:15.109 ================== 00:36:15.109 Generation Counter: 2 00:36:15.109 Number of Records: 2 00:36:15.109 Record Format: 0 00:36:15.109 00:36:15.109 Discovery Log Entry 0 00:36:15.109 ---------------------- 00:36:15.109 Transport Type: 3 (TCP) 00:36:15.109 Address Family: 1 (IPv4) 00:36:15.109 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:15.109 Entry Flags: 00:36:15.109 Duplicate Returned Information: 0 00:36:15.109 Explicit Persistent Connection Support for Discovery: 0 00:36:15.109 Transport Requirements: 00:36:15.109 Secure Channel: Not Specified 00:36:15.109 Port ID: 1 (0x0001) 00:36:15.109 Controller ID: 65535 (0xffff) 00:36:15.109 Admin Max SQ Size: 32 00:36:15.109 Transport Service Identifier: 4420 00:36:15.109 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:15.109 Transport Address: 10.0.0.1 00:36:15.109 Discovery Log Entry 1 00:36:15.109 ---------------------- 00:36:15.109 Transport Type: 3 (TCP) 00:36:15.109 Address Family: 1 (IPv4) 00:36:15.109 Subsystem Type: 2 (NVM Subsystem) 00:36:15.109 Entry Flags: 00:36:15.109 Duplicate Returned Information: 0 00:36:15.109 Explicit Persistent 
Connection Support for Discovery: 0 00:36:15.109 Transport Requirements: 00:36:15.109 Secure Channel: Not Specified 00:36:15.109 Port ID: 1 (0x0001) 00:36:15.109 Controller ID: 65535 (0xffff) 00:36:15.109 Admin Max SQ Size: 32 00:36:15.109 Transport Service Identifier: 4420 00:36:15.109 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:15.109 Transport Address: 10.0.0.1 00:36:15.109 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.368 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.369 get_feature(0x01) failed 00:36:15.369 get_feature(0x02) failed 00:36:15.369 get_feature(0x04) failed 00:36:15.369 ===================================================== 00:36:15.369 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:15.369 ===================================================== 00:36:15.369 Controller Capabilities/Features 00:36:15.369 ================================ 00:36:15.369 Vendor ID: 0000 00:36:15.369 Subsystem Vendor ID: 0000 00:36:15.369 Serial Number: 9713eca22a4e8fe6adea 00:36:15.369 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:15.369 Firmware Version: 6.7.0-68 00:36:15.369 Recommended Arb Burst: 6 00:36:15.369 IEEE OUI Identifier: 00 00 00 00:36:15.369 Multi-path I/O 00:36:15.369 May have multiple subsystem ports: Yes 00:36:15.369 May have multiple controllers: Yes 00:36:15.369 Associated with SR-IOV VF: No 00:36:15.369 Max Data Transfer Size: Unlimited 00:36:15.369 Max Number of Namespaces: 1024 00:36:15.369 Max Number of I/O Queues: 128 00:36:15.369 NVMe Specification Version (VS): 1.3 00:36:15.369 NVMe Specification Version (Identify): 1.3 00:36:15.369 Maximum Queue Entries: 1024 00:36:15.369 Contiguous Queues Required: No 00:36:15.369 Arbitration Mechanisms Supported 
00:36:15.369 Weighted Round Robin: Not Supported 00:36:15.369 Vendor Specific: Not Supported 00:36:15.369 Reset Timeout: 7500 ms 00:36:15.369 Doorbell Stride: 4 bytes 00:36:15.369 NVM Subsystem Reset: Not Supported 00:36:15.369 Command Sets Supported 00:36:15.369 NVM Command Set: Supported 00:36:15.369 Boot Partition: Not Supported 00:36:15.369 Memory Page Size Minimum: 4096 bytes 00:36:15.369 Memory Page Size Maximum: 4096 bytes 00:36:15.369 Persistent Memory Region: Not Supported 00:36:15.369 Optional Asynchronous Events Supported 00:36:15.369 Namespace Attribute Notices: Supported 00:36:15.369 Firmware Activation Notices: Not Supported 00:36:15.369 ANA Change Notices: Supported 00:36:15.369 PLE Aggregate Log Change Notices: Not Supported 00:36:15.369 LBA Status Info Alert Notices: Not Supported 00:36:15.369 EGE Aggregate Log Change Notices: Not Supported 00:36:15.369 Normal NVM Subsystem Shutdown event: Not Supported 00:36:15.369 Zone Descriptor Change Notices: Not Supported 00:36:15.369 Discovery Log Change Notices: Not Supported 00:36:15.369 Controller Attributes 00:36:15.369 128-bit Host Identifier: Supported 00:36:15.369 Non-Operational Permissive Mode: Not Supported 00:36:15.369 NVM Sets: Not Supported 00:36:15.369 Read Recovery Levels: Not Supported 00:36:15.369 Endurance Groups: Not Supported 00:36:15.369 Predictable Latency Mode: Not Supported 00:36:15.369 Traffic Based Keep ALive: Supported 00:36:15.369 Namespace Granularity: Not Supported 00:36:15.369 SQ Associations: Not Supported 00:36:15.369 UUID List: Not Supported 00:36:15.369 Multi-Domain Subsystem: Not Supported 00:36:15.369 Fixed Capacity Management: Not Supported 00:36:15.369 Variable Capacity Management: Not Supported 00:36:15.369 Delete Endurance Group: Not Supported 00:36:15.369 Delete NVM Set: Not Supported 00:36:15.369 Extended LBA Formats Supported: Not Supported 00:36:15.369 Flexible Data Placement Supported: Not Supported 00:36:15.369 00:36:15.369 Controller Memory Buffer Support 
00:36:15.369 ================================ 00:36:15.369 Supported: No 00:36:15.369 00:36:15.369 Persistent Memory Region Support 00:36:15.369 ================================ 00:36:15.369 Supported: No 00:36:15.369 00:36:15.369 Admin Command Set Attributes 00:36:15.369 ============================ 00:36:15.369 Security Send/Receive: Not Supported 00:36:15.369 Format NVM: Not Supported 00:36:15.369 Firmware Activate/Download: Not Supported 00:36:15.369 Namespace Management: Not Supported 00:36:15.369 Device Self-Test: Not Supported 00:36:15.369 Directives: Not Supported 00:36:15.369 NVMe-MI: Not Supported 00:36:15.369 Virtualization Management: Not Supported 00:36:15.369 Doorbell Buffer Config: Not Supported 00:36:15.369 Get LBA Status Capability: Not Supported 00:36:15.369 Command & Feature Lockdown Capability: Not Supported 00:36:15.369 Abort Command Limit: 4 00:36:15.369 Async Event Request Limit: 4 00:36:15.369 Number of Firmware Slots: N/A 00:36:15.369 Firmware Slot 1 Read-Only: N/A 00:36:15.369 Firmware Activation Without Reset: N/A 00:36:15.369 Multiple Update Detection Support: N/A 00:36:15.369 Firmware Update Granularity: No Information Provided 00:36:15.369 Per-Namespace SMART Log: Yes 00:36:15.369 Asymmetric Namespace Access Log Page: Supported 00:36:15.369 ANA Transition Time : 10 sec 00:36:15.369 00:36:15.369 Asymmetric Namespace Access Capabilities 00:36:15.369 ANA Optimized State : Supported 00:36:15.369 ANA Non-Optimized State : Supported 00:36:15.369 ANA Inaccessible State : Supported 00:36:15.369 ANA Persistent Loss State : Supported 00:36:15.369 ANA Change State : Supported 00:36:15.369 ANAGRPID is not changed : No 00:36:15.369 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:15.369 00:36:15.369 ANA Group Identifier Maximum : 128 00:36:15.369 Number of ANA Group Identifiers : 128 00:36:15.369 Max Number of Allowed Namespaces : 1024 00:36:15.369 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:15.369 Command Effects Log Page: Supported 
00:36:15.369 Get Log Page Extended Data: Supported 00:36:15.369 Telemetry Log Pages: Not Supported 00:36:15.369 Persistent Event Log Pages: Not Supported 00:36:15.369 Supported Log Pages Log Page: May Support 00:36:15.369 Commands Supported & Effects Log Page: Not Supported 00:36:15.369 Feature Identifiers & Effects Log Page:May Support 00:36:15.369 NVMe-MI Commands & Effects Log Page: May Support 00:36:15.369 Data Area 4 for Telemetry Log: Not Supported 00:36:15.369 Error Log Page Entries Supported: 128 00:36:15.369 Keep Alive: Supported 00:36:15.369 Keep Alive Granularity: 1000 ms 00:36:15.369 00:36:15.369 NVM Command Set Attributes 00:36:15.369 ========================== 00:36:15.369 Submission Queue Entry Size 00:36:15.369 Max: 64 00:36:15.369 Min: 64 00:36:15.369 Completion Queue Entry Size 00:36:15.369 Max: 16 00:36:15.369 Min: 16 00:36:15.369 Number of Namespaces: 1024 00:36:15.369 Compare Command: Not Supported 00:36:15.369 Write Uncorrectable Command: Not Supported 00:36:15.369 Dataset Management Command: Supported 00:36:15.369 Write Zeroes Command: Supported 00:36:15.369 Set Features Save Field: Not Supported 00:36:15.369 Reservations: Not Supported 00:36:15.369 Timestamp: Not Supported 00:36:15.369 Copy: Not Supported 00:36:15.369 Volatile Write Cache: Present 00:36:15.369 Atomic Write Unit (Normal): 1 00:36:15.369 Atomic Write Unit (PFail): 1 00:36:15.369 Atomic Compare & Write Unit: 1 00:36:15.369 Fused Compare & Write: Not Supported 00:36:15.369 Scatter-Gather List 00:36:15.369 SGL Command Set: Supported 00:36:15.369 SGL Keyed: Not Supported 00:36:15.369 SGL Bit Bucket Descriptor: Not Supported 00:36:15.369 SGL Metadata Pointer: Not Supported 00:36:15.369 Oversized SGL: Not Supported 00:36:15.369 SGL Metadata Address: Not Supported 00:36:15.369 SGL Offset: Supported 00:36:15.369 Transport SGL Data Block: Not Supported 00:36:15.369 Replay Protected Memory Block: Not Supported 00:36:15.369 00:36:15.369 Firmware Slot Information 00:36:15.369 
========================= 00:36:15.369 Active slot: 0 00:36:15.369 00:36:15.369 Asymmetric Namespace Access 00:36:15.369 =========================== 00:36:15.369 Change Count : 0 00:36:15.369 Number of ANA Group Descriptors : 1 00:36:15.369 ANA Group Descriptor : 0 00:36:15.369 ANA Group ID : 1 00:36:15.369 Number of NSID Values : 1 00:36:15.369 Change Count : 0 00:36:15.369 ANA State : 1 00:36:15.369 Namespace Identifier : 1 00:36:15.369 00:36:15.369 Commands Supported and Effects 00:36:15.369 ============================== 00:36:15.369 Admin Commands 00:36:15.369 -------------- 00:36:15.369 Get Log Page (02h): Supported 00:36:15.369 Identify (06h): Supported 00:36:15.369 Abort (08h): Supported 00:36:15.369 Set Features (09h): Supported 00:36:15.369 Get Features (0Ah): Supported 00:36:15.369 Asynchronous Event Request (0Ch): Supported 00:36:15.369 Keep Alive (18h): Supported 00:36:15.369 I/O Commands 00:36:15.369 ------------ 00:36:15.369 Flush (00h): Supported 00:36:15.369 Write (01h): Supported LBA-Change 00:36:15.369 Read (02h): Supported 00:36:15.369 Write Zeroes (08h): Supported LBA-Change 00:36:15.369 Dataset Management (09h): Supported 00:36:15.369 00:36:15.369 Error Log 00:36:15.369 ========= 00:36:15.369 Entry: 0 00:36:15.369 Error Count: 0x3 00:36:15.369 Submission Queue Id: 0x0 00:36:15.369 Command Id: 0x5 00:36:15.369 Phase Bit: 0 00:36:15.369 Status Code: 0x2 00:36:15.369 Status Code Type: 0x0 00:36:15.369 Do Not Retry: 1 00:36:15.369 Error Location: 0x28 00:36:15.369 LBA: 0x0 00:36:15.369 Namespace: 0x0 00:36:15.369 Vendor Log Page: 0x0 00:36:15.369 ----------- 00:36:15.369 Entry: 1 00:36:15.369 Error Count: 0x2 00:36:15.369 Submission Queue Id: 0x0 00:36:15.369 Command Id: 0x5 00:36:15.369 Phase Bit: 0 00:36:15.369 Status Code: 0x2 00:36:15.370 Status Code Type: 0x0 00:36:15.370 Do Not Retry: 1 00:36:15.370 Error Location: 0x28 00:36:15.370 LBA: 0x0 00:36:15.370 Namespace: 0x0 00:36:15.370 Vendor Log Page: 0x0 00:36:15.370 ----------- 00:36:15.370 
Entry: 2 00:36:15.370 Error Count: 0x1 00:36:15.370 Submission Queue Id: 0x0 00:36:15.370 Command Id: 0x4 00:36:15.370 Phase Bit: 0 00:36:15.370 Status Code: 0x2 00:36:15.370 Status Code Type: 0x0 00:36:15.370 Do Not Retry: 1 00:36:15.370 Error Location: 0x28 00:36:15.370 LBA: 0x0 00:36:15.370 Namespace: 0x0 00:36:15.370 Vendor Log Page: 0x0 00:36:15.370 00:36:15.370 Number of Queues 00:36:15.370 ================ 00:36:15.370 Number of I/O Submission Queues: 128 00:36:15.370 Number of I/O Completion Queues: 128 00:36:15.370 00:36:15.370 ZNS Specific Controller Data 00:36:15.370 ============================ 00:36:15.370 Zone Append Size Limit: 0 00:36:15.370 00:36:15.370 00:36:15.370 Active Namespaces 00:36:15.370 ================= 00:36:15.370 get_feature(0x05) failed 00:36:15.370 Namespace ID:1 00:36:15.370 Command Set Identifier: NVM (00h) 00:36:15.370 Deallocate: Supported 00:36:15.370 Deallocated/Unwritten Error: Not Supported 00:36:15.370 Deallocated Read Value: Unknown 00:36:15.370 Deallocate in Write Zeroes: Not Supported 00:36:15.370 Deallocated Guard Field: 0xFFFF 00:36:15.370 Flush: Supported 00:36:15.370 Reservation: Not Supported 00:36:15.370 Namespace Sharing Capabilities: Multiple Controllers 00:36:15.370 Size (in LBAs): 1953525168 (931GiB) 00:36:15.370 Capacity (in LBAs): 1953525168 (931GiB) 00:36:15.370 Utilization (in LBAs): 1953525168 (931GiB) 00:36:15.370 UUID: 0bda7b47-fbcc-4de7-8002-f720ed14dd8b 00:36:15.370 Thin Provisioning: Not Supported 00:36:15.370 Per-NS Atomic Units: Yes 00:36:15.370 Atomic Boundary Size (Normal): 0 00:36:15.370 Atomic Boundary Size (PFail): 0 00:36:15.370 Atomic Boundary Offset: 0 00:36:15.370 NGUID/EUI64 Never Reused: No 00:36:15.370 ANA group ID: 1 00:36:15.370 Namespace Write Protected: No 00:36:15.370 Number of LBA Formats: 1 00:36:15.370 Current LBA Format: LBA Format #00 00:36:15.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:15.370 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:15.370 rmmod nvme_tcp 00:36:15.370 rmmod nvme_fabrics 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:15.370 11:42:01 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.272 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:17.532 11:42:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:20.064 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:36:20.064 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:20.064 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:20.632 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:20.890 00:36:20.890 real 0m14.752s 00:36:20.890 user 0m3.407s 00:36:20.890 sys 0m7.579s 00:36:20.890 11:42:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:20.890 11:42:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:20.890 ************************************ 00:36:20.890 END TEST nvmf_identify_kernel_target 00:36:20.890 ************************************ 00:36:20.890 11:42:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:20.890 11:42:07 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:20.890 11:42:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:20.890 11:42:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:20.890 11:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.890 ************************************ 00:36:20.890 START TEST nvmf_auth_host 00:36:20.890 ************************************ 00:36:20.890 11:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:21.149 * Looking for test storage... 
00:36:21.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.149 
11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:21.149 
11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.149 11:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:21.150 11:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.150 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:21.150 11:42:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:21.150 11:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:36:21.150 11:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:26.419 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:26.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.420 11:42:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:26.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:36:26.420 Found net devices under 0000:86:00.0: cvl_0_0 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:26.420 Found net devices under 0000:86:00.1: cvl_0_1 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:26.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:26.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:36:26.420 00:36:26.420 --- 10.0.0.2 ping statistics --- 00:36:26.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.420 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:26.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:36:26.420 00:36:26.420 --- 10.0.0.1 ping statistics --- 00:36:26.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.420 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.420 11:42:12 
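
The nvmf_tcp_init sequence traced above (namespace creation, moving the target interface, addressing, the iptables rule, and the two connectivity pings) can be sketched as a dry-run script. The interface names and addresses are the ones from the trace; the helper only echoes what it would run, since the real `ip`/`iptables` commands need root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init wiring traced above: the target side of
# the link is moved into its own network namespace so the initiator (10.0.0.1
# on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over a real interface
# pair. "run" only echoes, because the actual commands require root.
run() { echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0      # gets 10.0.0.2, lives inside the namespace
NVMF_INITIATOR_INTERFACE=cvl_0_1   # gets 10.0.0.1, stays in the root namespace

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
	ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator-facing interface.
run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

After this wiring, every target-side command in the trace runs as `ip netns exec cvl_0_0_ns_spdk …`, which is why NVMF_APP is later prefixed with NVMF_TARGET_NS_CMD.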
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1150677 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1150677 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1150677 ']' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.420 11:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:26.987 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:27.245 11:42:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df86912cb9dda193147af71649f602a2 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.x9E 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df86912cb9dda193147af71649f602a2 0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df86912cb9dda193147af71649f602a2 0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df86912cb9dda193147af71649f602a2 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.x9E 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.x9E 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x9E 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d9b55b0b9ad3f686fa8e07b900df1b823f453ce0282bbfeea15477a061b8043 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hYF 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d9b55b0b9ad3f686fa8e07b900df1b823f453ce0282bbfeea15477a061b8043 3 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d9b55b0b9ad3f686fa8e07b900df1b823f453ce0282bbfeea15477a061b8043 3 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d9b55b0b9ad3f686fa8e07b900df1b823f453ce0282bbfeea15477a061b8043 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hYF 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hYF 00:36:27.245 11:42:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hYF 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c8d5769d8ecdacc39d77dfd6191829ef7a3177a3fe06fcd 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V9W 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c8d5769d8ecdacc39d77dfd6191829ef7a3177a3fe06fcd 0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c8d5769d8ecdacc39d77dfd6191829ef7a3177a3fe06fcd 0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c8d5769d8ecdacc39d77dfd6191829ef7a3177a3fe06fcd 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V9W 00:36:27.245 11:42:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V9W 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.V9W 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=285e49271b163243e1fcb6a23fd61c3a2e6956e78f0685ef 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jaB 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 285e49271b163243e1fcb6a23fd61c3a2e6956e78f0685ef 2 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 285e49271b163243e1fcb6a23fd61c3a2e6956e78f0685ef 2 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=285e49271b163243e1fcb6a23fd61c3a2e6956e78f0685ef 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.245 11:42:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jaB 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jaB 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jaB 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8dd5e4c875bdccf0e50530fa244bbd09 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZPj 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8dd5e4c875bdccf0e50530fa244bbd09 1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8dd5e4c875bdccf0e50530fa244bbd09 1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8dd5e4c875bdccf0e50530fa244bbd09 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:27.245 11:42:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZPj 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZPj 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZPj 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f858925220f05ff1d2ede72808b1e00 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ShY 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f858925220f05ff1d2ede72808b1e00 1 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f858925220f05ff1d2ede72808b1e00 1 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f858925220f05ff1d2ede72808b1e00 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:27.504 
11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ShY 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ShY 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ShY 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60f39de4818b1516205b411c4d0e7139022b9e56af9bfb9f 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tWC 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60f39de4818b1516205b411c4d0e7139022b9e56af9bfb9f 2 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60f39de4818b1516205b411c4d0e7139022b9e56af9bfb9f 2 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.504 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=60f39de4818b1516205b411c4d0e7139022b9e56af9bfb9f 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tWC 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tWC 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tWC 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b1e1aaf3b319e9f9a2d1e2fab3a308d 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3Lg 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b1e1aaf3b319e9f9a2d1e2fab3a308d 0 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b1e1aaf3b319e9f9a2d1e2fab3a308d 0 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.505 11:42:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b1e1aaf3b319e9f9a2d1e2fab3a308d 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3Lg 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3Lg 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3Lg 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5a0dd9fbab43209489dfb4804a26bcd105e906ef38d14962ffc90fb91ccd4008 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ap3 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5a0dd9fbab43209489dfb4804a26bcd105e906ef38d14962ffc90fb91ccd4008 3 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5a0dd9fbab43209489dfb4804a26bcd105e906ef38d14962ffc90fb91ccd4008 3 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5a0dd9fbab43209489dfb4804a26bcd105e906ef38d14962ffc90fb91ccd4008 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:27.505 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:27.767 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ap3 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ap3 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ap3 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1150677 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1150677 ']' 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
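
Each gen_dhchap_key call traced above reads len/2 random bytes with xxd (giving a len-character hex string), then hands that string plus a digest id to format_dhchap_key, which wraps it into a DHHC-1 secret via a small inline python step. A minimal sketch, assuming the standard NVMe DH-HMAC-CHAP secret encoding of `DHHC-1:<hh>:<base64(key bytes || CRC32(key), little-endian)>:` with a two-hex-digit hash id (00=null, 01=sha256, 02=sha384, 03=sha512) and the hex string itself used as the key bytes:

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key/format_dhchap_key pair traced above.
# Assumption: the DHHC-1 representation is "DHHC-1:<hh>:<base64 payload>:"
# where the payload is the ASCII hex key followed by its CRC32 (little-endian),
# matching the NVMe DH-HMAC-CHAP secret conventions.
format_dhchap_key() {
	local key=$1 digest_id=$2
	python3 - "$key" "$digest_id" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key() { # gen_dhchap_key <digest-id> <hex-len>
	local digest_id=$1 len=$2 key
	# len hex characters of randomness, e.g. -l 16 for a 32-char key
	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
	format_dhchap_key "$key" "$digest_id"
}

gen_dhchap_key 0 32   # null-digest key, like keys[0] above
gen_dhchap_key 3 64   # sha512 key, like ckeys[0] above
```

The trace then writes each formatted secret to a `mktemp -t spdk.key-<digest>.XXX` file and chmods it 0600 so the target can load it from the keyring.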
00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:27.768 11:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x9E 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hYF ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hYF 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.V9W 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.jaB ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jaB 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZPj 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ShY ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ShY 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.tWC 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 
11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3Lg ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3Lg 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ap3 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]]
00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:36:28.089 11:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:36:30.641 Waiting for block devices as requested
00:36:30.641 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:36:30.641 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:30.641 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:30.641 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:30.641 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:30.901 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:30.901 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:30.901 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:30.901 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:31.160 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:31.160 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:31.160 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:31.160 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:31.418 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:31.418 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:31.418 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:31.418 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host --
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:31.985 No valid GPT data, bailing 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:31.985 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:36:32.244
00:36:32.244 Discovery Log Number of Records 2, Generation counter 2
00:36:32.244 =====Discovery Log Entry 0======
00:36:32.244 trtype: tcp
00:36:32.244 adrfam: ipv4
00:36:32.244 subtype: current discovery subsystem
00:36:32.244 treq: not specified, sq flow control disable supported
00:36:32.244 portid: 1
00:36:32.244 trsvcid: 4420
00:36:32.244 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:36:32.244 traddr: 10.0.0.1
00:36:32.244 eflags: none
00:36:32.244 sectype: none
00:36:32.244 =====Discovery Log Entry 1======
00:36:32.244 trtype: tcp
00:36:32.244 adrfam: ipv4
00:36:32.244 subtype: nvme subsystem
00:36:32.244 treq: not specified, sq flow control disable supported
00:36:32.244 portid: 1
00:36:32.244 trsvcid: 4420
00:36:32.244 subnqn: nqn.2024-02.io.spdk:cnode0
00:36:32.244 traddr: 10.0.0.1
00:36:32.244 eflags: none
00:36:32.244 sectype: none
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:32.244 11:42:18
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:32.244 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.245 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.503 nvme0n1 00:36:32.503 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.503 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.503 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.503 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.503 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.504 nvme0n1 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.504 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.762 11:42:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.762 nvme0n1 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.762 11:42:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.762 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.021 nvme0n1 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.021 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.280 nvme0n1 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.280 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.540 nvme0n1 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.540 11:42:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.540 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.799 nvme0n1 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.799 11:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:33.799 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.800 11:42:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.800 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.059 nvme0n1 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.059 11:42:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.059 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.319 nvme0n1 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.319 11:42:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.319 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.578 nvme0n1 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.579 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.839 nvme0n1 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.839 11:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.839 11:42:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.839 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.099 nvme0n1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.099 
11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.099 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.358 nvme0n1 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.358 11:42:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.358 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.359 11:42:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.359 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.617 nvme0n1 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.617 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:36:35.876 11:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:35.876 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.136 nvme0n1 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.136 11:42:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.136 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.395 nvme0n1 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:36.395 
11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.395 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.396 11:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.396 11:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.396 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.396 11:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.964 nvme0n1 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.964 11:42:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.964 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:36.965 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.224 nvme0n1 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.224 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.793 nvme0n1 
00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.793 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.794 11:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.053 nvme0n1 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.053 11:42:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.053 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.312 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.571 nvme0n1 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.571 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 
00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.572 11:42:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:38.572 11:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.141 nvme0n1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.141 11:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.078 nvme0n1 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.078 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.648 nvme0n1 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:40.648 11:42:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.648 11:42:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.648 11:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.216 nvme0n1 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.216 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.217 11:42:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.217 11:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.785 nvme0n1 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.785 
11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.785 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.044 nvme0n1 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.044 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.045 
11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.045 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.304 nvme0n1 00:36:42.304 11:42:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.304 11:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.304 11:42:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.304 nvme0n1 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.304 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.563 
11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.563 11:42:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.563 nvme0n1 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.563 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.822 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.822 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:42.823 11:42:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.823 nvme0n1 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:42.823 11:42:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.823 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.083 nvme0n1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:43.083 11:42:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.083 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.343 nvme0n1 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.343 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.602 nvme0n1 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.602 11:42:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.602 11:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.861 nvme0n1 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.861 11:42:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.861 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.121 nvme0n1 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:44.121 
11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.121 
11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.121 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.380 nvme0n1 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.380 11:42:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.380 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.381 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.639 nvme0n1 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:44.639 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:44.898 11:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.898 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.899 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.159 nvme0n1 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.159 11:42:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.159 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.416 nvme0n1 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.416 11:42:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.416 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:45.417 11:42:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.417 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.673 nvme0n1 00:36:45.673 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.673 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.673 11:42:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.673 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.673 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:45.674 11:42:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.674 11:42:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:45.674 11:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.239 nvme0n1 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.239 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.240 
11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.240 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.498 nvme0n1 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.498 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates
00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:46.765 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:46.766 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:46.766 11:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:46.766 11:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:46.766 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.766 11:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.033 nvme0n1
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==:
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC:
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==:
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC:
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.033 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.599 nvme0n1
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=:
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=:
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:47.599 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.600 11:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.858 nvme0n1
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:47.858 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK:
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=:
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK:
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=:
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.116 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.680 nvme0n1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==:
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==:
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==:
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==:
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:48.681 11:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.249 nvme0n1
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl:
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH:
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl:
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH:
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:49.249 11:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.817 nvme0n1
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:49.817 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==:
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC:
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==:
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC:
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.076 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.649 nvme0n1
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=:
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=:
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:50.649 11:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:51.217 nvme0n1
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK:
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=:
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK:
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]]
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=:
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:51.217 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.218 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.477 nvme0n1 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.477 11:42:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.477 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.478 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.478 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:51.478 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.478 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.737 nvme0n1 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:51.737 
11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.737 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.738 11:42:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:51.997 nvme0n1 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:51.997 
11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.997 11:42:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.997 nvme0n1 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.997 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:52.256 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.257 nvme0n1 00:36:52.257 11:42:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.257 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:52.517 11:42:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.517 nvme0n1 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.517 
11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:52.517 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:52.776 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.777 11:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.777 nvme0n1 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:52.777 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.036 11:42:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.036 nvme0n1 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.036 11:42:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.036 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.295 nvme0n1 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.295 11:42:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:53.295 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.296 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.556 nvme0n1 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:53.556 11:42:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.556 11:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.817 nvme0n1 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.817 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:54.076 11:42:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.076 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.335 nvme0n1 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:54.335 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.336 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.595 nvme0n1 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.595 11:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.854 nvme0n1 00:36:54.854 11:42:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.854 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.855 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.114 nvme0n1 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.114 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:55.373 
11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:55.373 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.373 11:42:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.632 nvme0n1 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:55.632 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.633 11:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.202 nvme0n1 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.202 11:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.202 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.461 nvme0n1 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.461 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.462 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:56.721 11:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.721 11:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.979 nvme0n1 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.979 11:42:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:56.979 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.980 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.546 nvme0n1 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGY4NjkxMmNiOWRkYTE5MzE0N2FmNzE2NDlmNjAyYTI8beHK: 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmQ5YjU1YjBiOWFkM2Y2ODZmYThlMDdiOTAwZGYxYjgyM2Y0NTNjZTAyODJiYmZlZWExNTQ3N2EwNjFiODA0M2XE4f0=: 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.546 11:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.113 nvme0n1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.113 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.681 nvme0n1 00:36:58.681 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.681 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.681 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.681 11:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.681 11:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.681 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.681 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.681 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.681 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.681 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGRkNWU0Yzg3NWJkY2NmMGU1MDUzMGZhMjQ0YmJkMDkplqTl: 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y4NTg5MjUyMjBmMDVmZjFkMmVkZTcyODA4YjFlMDBvuKsH: 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:58.940 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.508 nvme0n1 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
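The entries above show one iteration of the test's digest/dhgroup matrix: for each DH group the host first calls `bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192`, then attaches with `--dhchap-key keyN --dhchap-ctrlr-key ckeyN`. As a reading aid, here is a small helper (illustrative only, not SPDK code) that reproduces the flat request shape the log dumps for `bdev_nvme_attach_controller`; the field names are taken from the request dumps later in this log, and the optional `dhchap_ctrlr_key` field is an assumption for the ctrlr-key variant, which the dumped (failing) requests do not include:

```python
def attach_request(keyid, ctrlr_key=False):
    """Mirror the bdev_nvme_attach_controller request dump printed in the log.

    Field names match the "request:" blocks in this transcript; this helper
    itself is a sketch for illustration, not part of the SPDK test scripts.
    """
    req = {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "dhchap_key": f"key{keyid}",
        "method": "bdev_nvme_attach_controller",
        "req_id": 1,
    }
    if ctrlr_key:
        # Assumed field name for --dhchap-ctrlr-key; not shown in the dumps.
        req["dhchap_ctrlr_key"] = f"ckey{keyid}"
    return req
```

For example, `attach_request(2, ctrlr_key=True)` corresponds to the `--dhchap-key key2 --dhchap-ctrlr-key ckey2` attach in the ffdhe8192 pass above.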
00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjBmMzlkZTQ4MThiMTUxNjIwNWI0MTFjNGQwZTcxMzkwMjJiOWU1NmFmOWJmYjlmFl0UyQ==: 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGIxZTFhYWYzYjMxOWU5ZjlhMmQxZTJmYWIzYTMwOGTH+JUC: 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.508 11:42:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:59.508 11:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.075 nvme0n1 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.075 11:42:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.075 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEwZGQ5ZmJhYjQzMjA5NDg5ZGZiNDgwNGEyNmJjZDEwNWU5MDZlZjM4ZDE0OTYyZmZjOTBmYjkxY2NkNDAwOHvg1fw=: 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:00.076 11:42:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.076 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.642 nvme0n1 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M4ZDU3NjlkOGVjZGFjYzM5ZDc3ZGZkNjE5MTgyOWVmN2EzMTc3YTNmZTA2ZmNkWby1qw==: 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjg1ZTQ5MjcxYjE2MzI0M2UxZmNiNmEyM2ZkNjFjM2EyZTY5NTZlNzhmMDY4NWVmhrKXVA==: 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:37:00.642 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.643 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:00.902 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.902 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:00.902 11:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.902 request: 00:37:00.902 { 00:37:00.902 "name": "nvme0", 00:37:00.902 "trtype": "tcp", 00:37:00.902 "traddr": "10.0.0.1", 00:37:00.902 "adrfam": "ipv4", 00:37:00.902 "trsvcid": "4420", 00:37:00.902 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:00.902 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:00.902 "prchk_reftag": false, 00:37:00.902 "prchk_guard": false, 00:37:00.902 "hdgst": false, 00:37:00.902 "ddgst": false, 00:37:00.902 "method": "bdev_nvme_attach_controller", 00:37:00.902 "req_id": 1 00:37:00.902 } 00:37:00.902 Got JSON-RPC error response 00:37:00.902 response: 00:37:00.902 { 00:37:00.902 "code": -5, 00:37:00.902 "message": "Input/output error" 00:37:00.902 } 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
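The `NOT rpc_cmd bdev_nvme_attach_controller ...` call above is a negative test: attaching without a usable DH-CHAP key must fail, so the target returns a JSON-RPC error with code -5 ("Input/output error") and the harness records exit status 1 (`es=1`). A minimal sketch of how such a response can be classified as the expected authentication failure (the function and mapping are illustrative, not SPDK's client code):

```python
import json

# Error object as dumped in the log's "Got JSON-RPC error response" block.
ERROR_RESPONSE = '{"code": -5, "message": "Input/output error"}'

def is_expected_auth_failure(response_text):
    # In this negative-path test, code -5 (I/O error) from
    # bdev_nvme_attach_controller is the expected outcome; any other
    # code would mean the test behaved unexpectedly.
    err = json.loads(response_text)
    return err.get("code") == -5

print(is_expected_auth_failure(ERROR_RESPONSE))  # → True
```

The same check applies to the second `NOT` attach below, which passes `--dhchap-key key2` but still fails with the identical error object.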
00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.902 request: 00:37:00.902 { 00:37:00.902 "name": "nvme0", 00:37:00.902 "trtype": "tcp", 00:37:00.902 "traddr": "10.0.0.1", 00:37:00.902 "adrfam": "ipv4", 00:37:00.902 "trsvcid": "4420", 00:37:00.902 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:00.902 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:00.902 "prchk_reftag": false, 00:37:00.902 "prchk_guard": false, 00:37:00.902 "hdgst": false, 00:37:00.902 "ddgst": false, 00:37:00.902 "dhchap_key": "key2", 00:37:00.902 "method": "bdev_nvme_attach_controller", 00:37:00.902 "req_id": 1 00:37:00.902 } 00:37:00.902 Got JSON-RPC error response 00:37:00.902 response: 00:37:00.902 { 
00:37:00.902 "code": -5, 00:37:00.902 "message": "Input/output error" 00:37:00.902 } 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:00.902 
11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:00.902 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:00.903 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.178 request: 00:37:01.178 { 00:37:01.178 "name": "nvme0", 00:37:01.178 "trtype": "tcp", 00:37:01.178 "traddr": "10.0.0.1", 00:37:01.178 "adrfam": "ipv4", 00:37:01.178 "trsvcid": "4420", 00:37:01.178 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:01.178 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:01.178 
"prchk_reftag": false, 00:37:01.178 "prchk_guard": false, 00:37:01.178 "hdgst": false, 00:37:01.178 "ddgst": false, 00:37:01.178 "dhchap_key": "key1", 00:37:01.178 "dhchap_ctrlr_key": "ckey2", 00:37:01.178 "method": "bdev_nvme_attach_controller", 00:37:01.178 "req_id": 1 00:37:01.178 } 00:37:01.178 Got JSON-RPC error response 00:37:01.178 response: 00:37:01.178 { 00:37:01.178 "code": -5, 00:37:01.178 "message": "Input/output error" 00:37:01.178 } 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:37:01.178 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:01.179 rmmod nvme_tcp 00:37:01.179 rmmod nvme_fabrics 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1150677 ']' 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1150677 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1150677 ']' 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1150677 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1150677 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1150677' 00:37:01.179 killing process with pid 1150677 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1150677 00:37:01.179 11:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1150677 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:37:02.115 11:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:04.653 11:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:06.559 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:37:06.559 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:06.559 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:07.508 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:07.508 11:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x9E /tmp/spdk.key-null.V9W /tmp/spdk.key-sha256.ZPj /tmp/spdk.key-sha384.tWC /tmp/spdk.key-sha512.ap3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:07.508 11:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:10.116 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:10.116 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:10.116 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:10.116 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:10.116 00:37:10.116 real 0m49.143s 00:37:10.116 user 0m44.172s 00:37:10.116 sys 0m11.146s 00:37:10.116 11:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:10.116 11:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.116 ************************************ 00:37:10.116 END TEST nvmf_auth_host 00:37:10.116 ************************************ 00:37:10.116 11:42:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:10.116 11:42:56 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:37:10.116 11:42:56 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:10.116 11:42:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:10.116 11:42:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.116 11:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.116 ************************************ 00:37:10.116 START TEST nvmf_digest 00:37:10.116 ************************************ 00:37:10.116 11:42:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:10.116 * Looking for test storage... 
00:37:10.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.376 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:37:10.377 11:42:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:15.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:15.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:15.654 Found net devices under 0000:86:00.0: cvl_0_0 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:15.654 Found net devices under 0000:86:00.1: cvl_0_1 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:15.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:15.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:37:15.654 00:37:15.654 --- 10.0.0.2 ping statistics --- 00:37:15.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.654 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:15.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:15.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:37:15.654 00:37:15.654 --- 10.0.0.1 ping statistics --- 00:37:15.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:15.654 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.654 ************************************ 00:37:15.654 START TEST nvmf_digest_clean 00:37:15.654 ************************************ 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:37:15.654 11:43:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:15.654 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1163739 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1163739 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1163739 ']' 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.655 11:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:15.914 [2024-07-12 11:43:02.035808] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:15.914 [2024-07-12 11:43:02.035896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:15.914 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.914 [2024-07-12 11:43:02.145029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.173 [2024-07-12 11:43:02.363111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.173 [2024-07-12 11:43:02.363147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.173 [2024-07-12 11:43:02.363160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.173 [2024-07-12 11:43:02.363172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.173 [2024-07-12 11:43:02.363182] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:16.173 [2024-07-12 11:43:02.363217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.741 11:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.999 null0 00:37:16.999 [2024-07-12 11:43:03.208975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.999 [2024-07-12 11:43:03.233160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.999 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.999 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:16.999 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.999 
11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.999 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:16.999 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1163987 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1163987 /var/tmp/bperf.sock 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1163987 ']' 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:17.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.000 11:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.000 [2024-07-12 11:43:03.305741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:17.000 [2024-07-12 11:43:03.305829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163987 ] 00:37:17.257 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.258 [2024-07-12 11:43:03.409517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.516 [2024-07-12 11:43:03.633108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.774 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:17.774 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:17.774 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.774 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.774 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:18.343 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.343 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.601 nvme0n1 00:37:18.601 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:18.601 11:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:37:18.860 Running I/O for 2 seconds... 00:37:20.765 00:37:20.765 Latency(us) 00:37:20.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.765 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:20.765 nvme0n1 : 2.00 21684.51 84.71 0.00 0.00 5896.94 3034.60 16754.42 00:37:20.765 =================================================================================================================== 00:37:20.765 Total : 21684.51 84.71 0.00 0.00 5896.94 3034.60 16754.42 00:37:20.765 0 00:37:20.765 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:20.765 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:20.765 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:20.765 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:20.765 | select(.opcode=="crc32c") 00:37:20.765 | "\(.module_name) \(.executed)"' 00:37:20.765 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1163987 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1163987 ']' 00:37:21.023 11:43:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1163987 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163987 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163987' 00:37:21.023 killing process with pid 1163987 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1163987 00:37:21.023 Received shutdown signal, test time was about 2.000000 seconds 00:37:21.023 00:37:21.023 Latency(us) 00:37:21.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.023 =================================================================================================================== 00:37:21.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.023 11:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1163987 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:21.957 11:43:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1164797 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1164797 /var/tmp/bperf.sock 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1164797 ']' 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:21.957 11:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:22.214 [2024-07-12 11:43:08.377012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:22.214 [2024-07-12 11:43:08.377109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164797 ] 00:37:22.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:22.214 Zero copy mechanism will not be used. 00:37:22.214 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.214 [2024-07-12 11:43:08.481777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.471 [2024-07-12 11:43:08.704482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.035 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:23.035 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:23.035 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:23.035 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:23.035 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:23.599 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.599 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.599 nvme0n1 00:37:23.857 11:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:23.857 11:43:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:23.857 Zero copy mechanism will not be used. 00:37:23.857 Running I/O for 2 seconds... 00:37:25.759 00:37:25.759 Latency(us) 00:37:25.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.759 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:25.759 nvme0n1 : 2.00 5174.15 646.77 0.00 0.00 3089.50 712.35 10029.86 00:37:25.759 =================================================================================================================== 00:37:25.759 Total : 5174.15 646.77 0.00 0.00 3089.50 712.35 10029.86 00:37:25.759 0 00:37:25.759 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:25.759 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:25.759 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:25.759 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:25.759 | select(.opcode=="crc32c") 00:37:25.759 | "\(.module_name) \(.executed)"' 00:37:25.759 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1164797 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1164797 ']' 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1164797 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1164797 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1164797' 00:37:26.018 killing process with pid 1164797 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1164797 00:37:26.018 Received shutdown signal, test time was about 2.000000 seconds 00:37:26.018 00:37:26.018 Latency(us) 00:37:26.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.018 =================================================================================================================== 00:37:26.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.018 11:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1164797 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:27.394 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1165611 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1165611 /var/tmp/bperf.sock 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1165611 ']' 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:27.395 11:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:27.395 [2024-07-12 11:43:13.414273] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:27.395 [2024-07-12 11:43:13.414371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165611 ] 00:37:27.395 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.395 [2024-07-12 11:43:13.516116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.395 [2024-07-12 11:43:13.731676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.962 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:27.962 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:27.962 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:27.962 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:27.962 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:28.531 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.531 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.791 nvme0n1 00:37:28.791 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:28.791 11:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:37:28.791 Running I/O for 2 seconds... 00:37:31.326 00:37:31.326 Latency(us) 00:37:31.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.326 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.326 nvme0n1 : 2.00 24431.08 95.43 0.00 0.00 5234.41 2635.69 11340.58 00:37:31.326 =================================================================================================================== 00:37:31.326 Total : 24431.08 95.43 0.00 0.00 5234.41 2635.69 11340.58 00:37:31.326 0 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:31.326 | select(.opcode=="crc32c") 00:37:31.326 | "\(.module_name) \(.executed)"' 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1165611 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1165611 ']' 00:37:31.326 11:43:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1165611 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1165611 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1165611' 00:37:31.326 killing process with pid 1165611 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1165611 00:37:31.326 Received shutdown signal, test time was about 2.000000 seconds 00:37:31.326 00:37:31.326 Latency(us) 00:37:31.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.326 =================================================================================================================== 00:37:31.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:31.326 11:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1165611 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:32.263 11:43:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1166405 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1166405 /var/tmp/bperf.sock 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1166405 ']' 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:32.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:32.263 11:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:32.263 [2024-07-12 11:43:18.452431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:32.263 [2024-07-12 11:43:18.452544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166405 ] 00:37:32.263 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:32.263 Zero copy mechanism will not be used. 00:37:32.263 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.263 [2024-07-12 11:43:18.558160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.522 [2024-07-12 11:43:18.790324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.090 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:33.090 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:37:33.090 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:33.090 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:33.090 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:33.659 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:33.659 11:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:33.963 nvme0n1 00:37:33.963 11:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:33.963 11:43:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:33.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:33.963 Zero copy mechanism will not be used. 00:37:33.963 Running I/O for 2 seconds... 00:37:36.508 00:37:36.508 Latency(us) 00:37:36.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.508 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:36.508 nvme0n1 : 2.00 6358.48 794.81 0.00 0.00 2511.44 1866.35 12537.32 00:37:36.508 =================================================================================================================== 00:37:36.508 Total : 6358.48 794.81 0.00 0.00 2511.44 1866.35 12537.32 00:37:36.508 0 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:36.508 | select(.opcode=="crc32c") 00:37:36.508 | "\(.module_name) \(.executed)"' 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1166405 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1166405 ']' 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1166405 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1166405 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1166405' 00:37:36.508 killing process with pid 1166405 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1166405 00:37:36.508 Received shutdown signal, test time was about 2.000000 seconds 00:37:36.508 00:37:36.508 Latency(us) 00:37:36.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.508 =================================================================================================================== 00:37:36.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:36.508 11:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1166405 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1163739 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1163739 ']' 00:37:37.445 
11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1163739 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163739 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163739' 00:37:37.445 killing process with pid 1163739 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1163739 00:37:37.445 11:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1163739 00:37:38.824 00:37:38.824 real 0m22.880s 00:37:38.824 user 0m42.469s 00:37:38.824 sys 0m4.922s 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:38.824 ************************************ 00:37:38.824 END TEST nvmf_digest_clean 00:37:38.824 ************************************ 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.824 11:43:24 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:38.824 ************************************ 00:37:38.824 START TEST nvmf_digest_error 00:37:38.824 ************************************ 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1167490 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1167490 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1167490 ']' 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:38.824 11:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.824 [2024-07-12 11:43:24.984550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:38.824 [2024-07-12 11:43:24.984652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.824 EAL: No free 2048 kB hugepages reported on node 1 00:37:38.824 [2024-07-12 11:43:25.091367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.084 [2024-07-12 11:43:25.298880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.084 [2024-07-12 11:43:25.298928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.084 [2024-07-12 11:43:25.298941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.084 [2024-07-12 11:43:25.298952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.084 [2024-07-12 11:43:25.298962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:39.084 [2024-07-12 11:43:25.298990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:39.654 [2024-07-12 11:43:25.800823] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.654 11:43:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:39.913 null0 00:37:39.913 [2024-07-12 11:43:26.180632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.913 
[2024-07-12 11:43:26.204819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1167741 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1167741 /var/tmp/bperf.sock 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1167741 ']' 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:39.913 11:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:40.172 [2024-07-12 11:43:26.279654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:40.172 [2024-07-12 11:43:26.279744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167741 ] 00:37:40.172 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.172 [2024-07-12 11:43:26.383074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.430 [2024-07-12 11:43:26.604742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:40.998 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:41.567 nvme0n1 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:41.567 11:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:41.567 Running I/O for 2 seconds... 
00:37:41.567 [2024-07-12 11:43:27.781626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.781672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.781687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.796009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.796043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.796057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.809620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.809650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.809663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.821745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.821774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.835307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.835337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.835350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.845619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.845647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.845659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.859682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.859709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.859721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.873990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.874019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 
11:43:27.874030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.889008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.889037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.889058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.901665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.901694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.901707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.567 [2024-07-12 11:43:27.916347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.567 [2024-07-12 11:43:27.916375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.567 [2024-07-12 11:43:27.916394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.926195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.926223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.926235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.940259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.940287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.940299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.955806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.955847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.969428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.969459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.969472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.982990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.983020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.983032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:27.993101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:27.993129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:27.993141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.007752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.007781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.007793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.021376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.021410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.021422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.034507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.034535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.034547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.045473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.045501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.045513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.057922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.057950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.057962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 11:43:28.070548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:41.828 [2024-07-12 11:43:28.070578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.828 [2024-07-12 11:43:28.070590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.828 [2024-07-12 
11:43:28.080193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.080221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.080233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.093070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.093098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.093111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.103786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.103814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.103827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.116445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.116473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.116485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.128113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.128141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.128153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.138017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.138044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.138057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.148899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.148927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.148939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.159383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.159410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.159422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.170533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.170560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.170576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.828 [2024-07-12 11:43:28.181165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:41.828 [2024-07-12 11:43:28.181192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.828 [2024-07-12 11:43:28.181204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.191967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.191995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.192008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.202341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.202369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.202386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.213498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.213527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.213539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.222907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.222935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.222948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.236397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.236425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.236437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.246521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.246548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.246560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.259136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.259166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.259178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.272487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.272514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.272526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.282805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.282833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.282845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.296314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.296343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.296356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.305985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.306013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.306025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.319484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.319514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.319526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.329084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.329113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.329125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.341310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.341338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.341350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.355922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.355952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.355964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.364895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.364925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.364940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.378317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.378345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.378358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.391425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.391454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.391466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.401045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.401081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.401092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.415151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.415181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.088 [2024-07-12 11:43:28.415193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.088 [2024-07-12 11:43:28.430511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.088 [2024-07-12 11:43:28.430540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.089 [2024-07-12 11:43:28.430551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.089 [2024-07-12 11:43:28.439712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.089 [2024-07-12 11:43:28.439740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.089 [2024-07-12 11:43:28.439752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.451464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.348 [2024-07-12 11:43:28.451494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.348 [2024-07-12 11:43:28.451506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.462615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.348 [2024-07-12 11:43:28.462644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.348 [2024-07-12 11:43:28.462657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.473280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.348 [2024-07-12 11:43:28.473309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.348 [2024-07-12 11:43:28.473321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.485003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.348 [2024-07-12 11:43:28.485032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.348 [2024-07-12 11:43:28.485045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.496234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.348 [2024-07-12 11:43:28.496263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.348 [2024-07-12 11:43:28.496277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.348 [2024-07-12 11:43:28.506719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.506747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.506760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.516488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.516517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.516529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.530527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.530560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.530572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.541873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.541902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.541914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.552122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.552151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.552163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.563846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.563876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.563896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.574019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.574049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.574063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.584419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.584446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.584458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.596617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.596647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.596659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.606023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.606051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.618021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.618049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.618063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.631017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.631046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.631058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.641547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.641575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.641587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.654460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.654487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.654504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.664221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.664248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.664260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.678407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.678435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.678448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.689796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.689824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.689836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.349 [2024-07-12 11:43:28.699282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.349 [2024-07-12 11:43:28.699310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.349 [2024-07-12 11:43:28.699323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.711529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.711557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.711569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.724637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.724666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.724678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.734148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.734177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.734190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.749257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.749298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.759026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.759053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.759068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.773120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.773159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.787364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.787399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.787411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.799947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.799976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.799988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.810158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.810185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.810197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.822747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.822774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.608 [2024-07-12 11:43:28.822786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.608 [2024-07-12 11:43:28.832909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.608 [2024-07-12 11:43:28.832938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.832950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.844609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.844636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.844648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.856113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.856141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.856153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.866652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.866680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.866691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.877532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.877560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.888715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.888743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.888755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.899830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.899857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.899869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.910197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.910225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.910237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.919898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.919926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.919938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.931945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.931974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.931986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.942961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.942989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.943001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:42.609 [2024-07-12 11:43:28.953066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:42.609 [2024-07-12 11:43:28.953094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.609 [2024-07-12 11:43:28.953109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.609 [2024-07-12 11:43:28.963267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.609 [2024-07-12 11:43:28.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.609 [2024-07-12 11:43:28.963307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:28.974179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:28.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:28.974220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:28.985358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:28.985392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:28.985406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:28.995609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:28.995636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7214 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:28.995648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.006471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.006497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.006509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.019277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.019304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.019316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.029357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.029389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.029401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.040009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.040038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.040051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.051092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.051121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.051133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.061317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.061345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.061358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.074011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.074039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.074051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.084811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.084840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.084852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.093854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.093894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.105341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.105369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.105387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.117062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.117089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.117102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.127533] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.127562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.127574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.137889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.137917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.137937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.148402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.148431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.148442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.159870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.159898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.159910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.171087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.171113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.171126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.182808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.182836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.182848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.191918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.191945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.191957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.204018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.204046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.204058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.868 [2024-07-12 11:43:29.214050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:42.868 [2024-07-12 11:43:29.214077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.868 [2024-07-12 11:43:29.214090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.226461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.226490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.226503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.236672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.236704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.236717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.249984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.250013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24166 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.250025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.264290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.264317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.264329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.278661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.278689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.278701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.292073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.292101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.292113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.301744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.301771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.301783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.316730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.316758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.316770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.329448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.329476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.329488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.339550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.339577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.339592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.354235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.354264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.354276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.363535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.363563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.363575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.377590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.377618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.377636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.390606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.390633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.390645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.400388] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.400415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.400427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.414833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.414862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.424440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.424467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.424480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.438120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.438148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.438160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.449070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.449101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.449113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.460134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.460162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.460174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.472598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.472626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.472639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.128 [2024-07-12 11:43:29.483691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.128 [2024-07-12 11:43:29.483718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.128 [2024-07-12 11:43:29.483730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.494274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.494303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.494315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.505239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.505268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.505281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.518164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.518191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.518203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.528645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.528672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23709 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.528684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.542592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.542620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.542635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.554206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.554234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.554246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.563925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.563952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.563965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.575955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.575982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.575994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.588486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.588513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.588526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.598899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.598927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.598939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.613342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.613371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.613388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.627853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.627880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.627892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.640776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.640804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.640816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.651500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.651532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.651545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.663726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.663754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.663766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.675103] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.675132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.675144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.684769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.684797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.684809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.696020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.696049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.388 [2024-07-12 11:43:29.696061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.388 [2024-07-12 11:43:29.706569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.388 [2024-07-12 11:43:29.706597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.389 [2024-07-12 11:43:29.706609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.389 [2024-07-12 11:43:29.718858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.389 [2024-07-12 11:43:29.718887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.389 [2024-07-12 11:43:29.718899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.389 [2024-07-12 11:43:29.733670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.389 [2024-07-12 11:43:29.733699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.389 [2024-07-12 11:43:29.733711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.389 [2024-07-12 11:43:29.742956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.389 [2024-07-12 11:43:29.742984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.389 [2024-07-12 11:43:29.743000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.648 [2024-07-12 11:43:29.756534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:43.648 [2024-07-12 11:43:29.756563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.648 [2024-07-12 11:43:29.756575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:43.648 [2024-07-12 11:43:29.770310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:43.648 [2024-07-12 11:43:29.770340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6836 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.648 [2024-07-12 11:43:29.770353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:43.648
00:37:43.648 Latency(us)
00:37:43.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:43.648 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:43.648 nvme0n1 : 2.00 21637.16 84.52 0.00 0.00 5908.82 3134.33 19603.81
00:37:43.648 ===================================================================================================================
00:37:43.648 Total : 21637.16 84.52 0.00 0.00 5908.82 3134.33 19603.81
00:37:43.648 0
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:43.648 | .driver_specific
00:37:43.648 | .nvme_error
00:37:43.648 | .status_code
00:37:43.648 | .command_transient_transport_error'
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1167741
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1167741 ']'
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1167741
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:43.648 11:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1167741
00:37:43.907 11:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:43.907 11:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:43.907 11:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1167741'
00:37:43.907 killing process with pid 1167741
00:37:43.907 11:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1167741
00:37:43.907 Received shutdown signal, test time was about 2.000000 seconds
00:37:43.907
00:37:43.907 Latency(us)
00:37:43.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:43.907 ===================================================================================================================
00:37:43.907 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:43.907 11:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1167741
00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
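The `(( 170 > 0 ))` check above comes from `get_transient_errcount`, which pipes `bdev_get_iostat` output through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. A minimal sketch of the same extraction in Python, against a hypothetical iostat payload that shows only the fields the filter touches (the real RPC response carries many more):

```python
import json

# Hypothetical, trimmed-down sample of `rpc.py bdev_get_iostat -b nvme0n1`
# output; only the path walked by the jq filter is reproduced here.
SAMPLE_IOSTAT = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {
                    "command_transient_transport_error": 170
                }
            }
        }
    }]
})

def get_transient_errcount(iostat_json: str) -> int:
    """Python equivalent of the jq pipeline used by get_transient_errcount."""
    stat = json.loads(iostat_json)
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = get_transient_errcount(SAMPLE_IOSTAT)
print(count)  # the test script then asserts this is > 0
```

The test passes when at least one command completed with a transient transport error, i.e. when the injected digest corruption was actually observed.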
00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1168445 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1168445 /var/tmp/bperf.sock 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1168445 ']' 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:44.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:44.842 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:44.842 [2024-07-12 11:43:31.133709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:44.842 [2024-07-12 11:43:31.133810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168445 ] 00:37:44.842 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:44.842 Zero copy mechanism will not be used. 00:37:44.842 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.102 [2024-07-12 11:43:31.236016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.102 [2024-07-12 11:43:31.458323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.671 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:45.671 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:45.671 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:45.671 11:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
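The `waitforlisten 1168445 /var/tmp/bperf.sock` step above blocks until the freshly launched bdevperf process brings up its RPC socket (with `max_retries=100`). A rough sketch of that polling loop, under the simplifying assumption that socket-file existence is a good-enough readiness signal; `wait_for_socket` is a hypothetical name, and the real helper additionally verifies the process is alive and the RPC server answers:

```python
import os
import time

def wait_for_socket(path: str, max_retries: int = 100,
                    interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket path appears, or give up.

    Sketch of what waitforlisten does before the harness starts issuing
    RPCs with `rpc.py -s /var/tmp/bperf.sock ...`.
    """
    for _ in range(max_retries):
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

Only once this returns does the script configure the bdev layer over the same socket.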
00:37:45.931 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:46.190 nvme0n1
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:46.190 11:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:46.451 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:46.451 Zero copy mechanism will not be used.
00:37:46.451 Running I/O for 2 seconds...
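The controller is attached with `--ddgst` (data digest enabled), then `accel_error_inject_error -o crc32c -t corrupt -i 32` makes the accel layer corrupt the computed crc32c on 32 operations. The digest the initiator recomputes over received data then no longer matches the one carried in the PDU, which is exactly what every "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pair below records. The mechanics can be illustrated with an ordinary CRC-32 (a stand-in: NVMe/TCP data digests actually use CRC32C/Castagnoli, and nothing here is SPDK code):

```python
import zlib

def digest(data: bytes) -> int:
    # Stand-in checksum: zlib.crc32 is plain CRC-32, not the CRC32C used by
    # NVMe/TCP data digests, but the mismatch behaviour is identical.
    return zlib.crc32(data)

payload = bytes(range(256)) * 16    # pretend this is the READ payload
wire_digest = digest(payload)       # digest carried alongside the data

# The injected corruption is equivalent to the data no longer matching
# its digest; a single bit flip is enough, since a CRC detects any
# single-bit error:
corrupted = bytearray(payload)
corrupted[0] ^= 0x01
mismatch = digest(bytes(corrupted)) != wire_digest

print(mismatch)  # True -> "data digest error", completion status 00/22
```

Because the transport (not the medium) is at fault, the command completes with a *transient* transport error, which is the counter the test later reads back.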
00:37:46.451 [2024-07-12 11:43:32.574572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.574618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.574634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.582472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.582506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.582521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.589629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.589659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.589672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.596415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.596446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.596458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.603068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.603098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.609797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.609825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.609838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.616243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.616272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.616284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.622779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.622808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 
[2024-07-12 11:43:32.622820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.628829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.628877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.636331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.636360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.636372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.643493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.643522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.643534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.650337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.650366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.650385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.656861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.656890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.656902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.663400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.663428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.663440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.669986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.670015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.670027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.676564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 
11:43:32.676593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.676609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.683101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.683131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.683145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.689166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.689195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.689207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.695472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.695501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.695513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.701781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.701811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.701823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.708046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.708076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.708088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.714465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.714493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.714506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.720808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.720836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 
11:43:32.727224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.727252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.727264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.733757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.451 [2024-07-12 11:43:32.733790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.451 [2024-07-12 11:43:32.733803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:46.451 [2024-07-12 11:43:32.740218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.452 [2024-07-12 11:43:32.740246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.452 [2024-07-12 11:43:32.740259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:46.452 [2024-07-12 11:43:32.746647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:46.452 [2024-07-12 11:43:32.746675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:46.452 [2024-07-12 11:43:32.746686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.753152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.753180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.753193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.759301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.759329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.759341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.765475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.765503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.765515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.771737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.771765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.771777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.777940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.777968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.777980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.784351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.784386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.784403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.790873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.790902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.790914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.797467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.797496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.797508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.452 [2024-07-12 11:43:32.804226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.452 [2024-07-12 11:43:32.804255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.452 [2024-07-12 11:43:32.804267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.810981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.811010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.811023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.817640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.817669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.817681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.824186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.824215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.824229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.830390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.830419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.830431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.836832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.836863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.836876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.842928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.842961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.848989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.849017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.855172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.855200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.855212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.861540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.861568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.861580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.867929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.867971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.874547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.874575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.874587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.881175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.881205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.881216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.887838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.887867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.887879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.894418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.894462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.900712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.900740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.900752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.907426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.907464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.907476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.915009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.915039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.915051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.922881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.922911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.922923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.931294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.931324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.931337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.940560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.713 [2024-07-12 11:43:32.940590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.713 [2024-07-12 11:43:32.940603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.713 [2024-07-12 11:43:32.948893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.948922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.948935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.956624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.956654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.956666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.964373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.964416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.964428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.972444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.972474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.972486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.980709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.980738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.980751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.988093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.988122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.988135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:32.995539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:32.995568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:32.995580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.002394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.002423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.002436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.009086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.009115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.009127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.015787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.015815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.015827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.022192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.022221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.022233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.028724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.028752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.028764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.035601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.035630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.035641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.042507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.042536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.042548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.049374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.049409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.049421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.055830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.055858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.055870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.714 [2024-07-12 11:43:33.062972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.714 [2024-07-12 11:43:33.063002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.714 [2024-07-12 11:43:33.063015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.070760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.070791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.070804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.077983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.078011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.078024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.085421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.085456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.085469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.093806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.093838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.093850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.101788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.101818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.109900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.109929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.109941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.117975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.118004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.118017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.126044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.126072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.126085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.130192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.130220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.130231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.137791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.137819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.137831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.144447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.144475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.144487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.151278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.151306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.151319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.158043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.158073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.158086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.166008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.166038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.166051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.173981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.174011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.174023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.181764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.181792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.181805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.190370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.190407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.190419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.198614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.198644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.198656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.206292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.206321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.206334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.214126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.214159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.214172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.222307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.222336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.222350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.230072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.230101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.230112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.237494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.237522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.237535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.245751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.245779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.245791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.253610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.253638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.253651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.261550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.261579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.261592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.269352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.269387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.269400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.276665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.276694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.276706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.283789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.283818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.283830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.291858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.291888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.291900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.300215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.300245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.300257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.307845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.307875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.307887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.316175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.316204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.316216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:46.974 [2024-07-12 11:43:33.324307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:46.974 [2024-07-12 11:43:33.324336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:46.974 [2024-07-12 11:43:33.324348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:47.235 [2024-07-12 11:43:33.331654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780)
00:37:47.235 [2024-07-12 11:43:33.331684] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.331697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.339519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.339548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.339561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.347659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.347689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.347707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.355290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.355319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.355331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.362622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.362651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.362662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.369947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.369975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.369987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.376932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.376959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.376971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.383785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.383813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.383825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.390478] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.390505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.390517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.397155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.397183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.397194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.403849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.403877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.403889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.410579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.410608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.410619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.417202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.417230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.417242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.423885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.423913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.423925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.430585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.430613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.430625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.437215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.437243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.437255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.443812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.443838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.443850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.450479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.450518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.457113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.457140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.457152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.463754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.463781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.463797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.470406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.470434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.470445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.477050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.477076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.477088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.483645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.483672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.483684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.490201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.490228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.490240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.496850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.496877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.496889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.503400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.503427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.503439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.509762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.509790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.509802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.516210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.516236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.516248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.522546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.522574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.522587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.235 [2024-07-12 11:43:33.528936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.235 [2024-07-12 11:43:33.528964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.235 [2024-07-12 11:43:33.528983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.535342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.535369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.535386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.541736] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.541764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.541775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.548327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.548354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.548365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.554952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.554979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.554991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.561529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.561556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.561568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.568140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.568167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.568179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.574762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.574790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.574805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.581391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.581420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.581432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.236 [2024-07-12 11:43:33.588063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.236 [2024-07-12 11:43:33.588090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.236 [2024-07-12 11:43:33.588102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.495 [2024-07-12 11:43:33.594698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.495 [2024-07-12 11:43:33.594726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.594738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.601436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.601464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.601476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.608259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.608287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.608299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.615149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.615177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.615189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.621911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.621939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.621950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.628538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.628566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.628578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.635170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.635198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.635209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.641861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.641892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.641904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.648339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.648367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.648385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.654699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.654727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.654740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.660394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.660423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.660435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.666371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.666405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.666417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.672388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.672415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.672427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.678531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.678559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.678570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.684714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.684742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.684757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.690957] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.690984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.690996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.696357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.696391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.696403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.699784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.699812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.699824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.705917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.705943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.705955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.711866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.711892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.711904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.717924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.717951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.717964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.723947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.723974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.723986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.730316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.730343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.730355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.736789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.736822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.736834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.743392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.743419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.743431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.749710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.749737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.749749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.755543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.755571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.755584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.761944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.761973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.761985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.768266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.496 [2024-07-12 11:43:33.768295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.496 [2024-07-12 11:43:33.768307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.496 [2024-07-12 11:43:33.773680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.773708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.773720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.779529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.779558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.779570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.785546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.785573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.785589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.791522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.791550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.791562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.797603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.797638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.797649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.803757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.803785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.803797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.810172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.810201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.810212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.815795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.815822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.815834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.819485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.819511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.819524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.825648] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.825675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.825687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.831647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.831674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.831686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.837736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.837766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.837778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.844097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.844124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.844136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.497 [2024-07-12 11:43:33.850447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.497 [2024-07-12 11:43:33.850473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.497 [2024-07-12 11:43:33.850486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.756 [2024-07-12 11:43:33.856839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.756 [2024-07-12 11:43:33.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.756 [2024-07-12 11:43:33.856879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.756 [2024-07-12 11:43:33.863248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.756 [2024-07-12 11:43:33.863274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.756 [2024-07-12 11:43:33.863286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.756 [2024-07-12 11:43:33.869680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.869706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.869718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.876052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.876078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.876090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.882464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.882490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.882502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.888789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.888816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.888828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.895199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.895226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.895238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.902494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.902522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.902535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.911428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.911457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.918067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.918095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.918108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.925641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.925669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.925681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.933299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.933327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.933339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.940355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.940388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.947001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.947027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.947038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.953747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.953778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.953790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.960789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.960818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.960831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.969095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.969124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.969137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.976874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.976902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.976915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.984074] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.984103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.984114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.991215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.991242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.991254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:33.998188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:33.998218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:33.998230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.005191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.005218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.013281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.013323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.021921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.021951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.021963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.030002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.030033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.030046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.037958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.037989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.038001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.046372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.046407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.046420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.055465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.055493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.055505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.063889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.063918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.063930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.071373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.071409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.071422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.757 [2024-07-12 11:43:34.078438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.757 [2024-07-12 11:43:34.078466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.757 [2024-07-12 11:43:34.078478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.758 [2024-07-12 11:43:34.085325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.758 [2024-07-12 11:43:34.085357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.758 [2024-07-12 11:43:34.085369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:47.758 [2024-07-12 11:43:34.092096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.758 [2024-07-12 11:43:34.092123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.758 [2024-07-12 11:43:34.092135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:47.758 [2024-07-12 11:43:34.098762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.758 [2024-07-12 11:43:34.098792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.758 [2024-07-12 11:43:34.098805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:47.758 [2024-07-12 11:43:34.105587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.758 [2024-07-12 11:43:34.105616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.758 [2024-07-12 11:43:34.105628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:47.758 [2024-07-12 11:43:34.112537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:47.758 [2024-07-12 11:43:34.112565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:47.758 [2024-07-12 11:43:34.112578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.119665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.119694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.119707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.127092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.127121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.127134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.134972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.135002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.135015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.142513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.142543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.142555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.150467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.150496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.150509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.157858] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.157889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.157902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.164899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.164927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.164940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.171966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.171995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.172007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.179018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.179060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.186122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.186152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.186164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.193639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.193668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.193681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.201389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.201419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.201432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.208943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.208972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.208988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.216449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.216479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.216492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.224196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.224228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.224242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.232354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.232395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.232426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.240182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.240213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.240227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.248257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.248289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.248303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.257421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.257451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.257465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.266078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.266110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.266124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.275017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.275050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.275064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.283065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.283097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.283111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.290838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.290868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.019 [2024-07-12 11:43:34.298586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.019 [2024-07-12 11:43:34.298617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.019 [2024-07-12 11:43:34.298630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.306113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.306144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.306158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.313415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.313445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.313459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.320638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.320668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.320681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.328083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.328111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.328124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.335673] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.335703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.335717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.343875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.343906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.343923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.351436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.351466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.351480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.359122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.359152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.359167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.366517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.366547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.366560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.020 [2024-07-12 11:43:34.373792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.020 [2024-07-12 11:43:34.373821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.020 [2024-07-12 11:43:34.373834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.380909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.380939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.380952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.387972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.388001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.388013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.395050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.395079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.395091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.401845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.401875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.401887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.408494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.408523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.408535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.415304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.415333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.415353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.422260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.422288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.422300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.429802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.429831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.429843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.436819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.436848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.436860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.443537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.443566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.443579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.447236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.447264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.447276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.454182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.454210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.454222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.460816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.460844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.460860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.467320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.467347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.467359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.473869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.473898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.473910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.480453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.480480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.480492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.487082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.487110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.487121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.493599] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.493638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.499986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.500014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.500025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.506408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.506437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.506448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.512848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.512876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.512887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.280 [2024-07-12 11:43:34.519248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.280 [2024-07-12 11:43:34.519276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.280 [2024-07-12 11:43:34.519288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.525634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.525662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.525673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.532001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.532029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.532040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.538519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.538547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.538559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.545574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.545615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.553374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.553409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.553421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.560770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.560798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.560810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:48.281 [2024-07-12 11:43:34.568424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032d780) 00:37:48.281 [2024-07-12 11:43:34.568452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.281 [2024-07-12 11:43:34.568481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:48.281 00:37:48.281 Latency(us) 00:37:48.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.281 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:48.281 nvme0n1 : 2.00 4458.68 557.33 0.00 0.00 3584.76 562.75 9459.98 00:37:48.281 =================================================================================================================== 00:37:48.281 Total : 4458.68 557.33 0.00 0.00 3584.76 562.75 9459.98 00:37:48.281 0 00:37:48.281 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:48.281 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:48.281 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:48.281 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:48.281 | .driver_specific 00:37:48.281 | .nvme_error 00:37:48.281 | .status_code 00:37:48.281 | .command_transient_transport_error' 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 288 > 0 )) 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1168445 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1168445 ']' 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1168445 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:48.540 11:43:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1168445 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1168445' 00:37:48.540 killing process with pid 1168445 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1168445 00:37:48.540 Received shutdown signal, test time was about 2.000000 seconds 00:37:48.540 00:37:48.540 Latency(us) 00:37:48.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.540 =================================================================================================================== 00:37:48.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:48.540 11:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1168445 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1169315 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1169315 
/var/tmp/bperf.sock 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1169315 ']' 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:49.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:49.918 11:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:49.918 [2024-07-12 11:43:35.935288] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:37:49.918 [2024-07-12 11:43:35.935388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169315 ] 00:37:49.918 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.918 [2024-07-12 11:43:36.037417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.918 [2024-07-12 11:43:36.253000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:50.486 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:50.486 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:50.486 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:50.486 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:50.745 11:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:51.004 nvme0n1 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:51.004 11:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:51.263 Running I/O for 2 seconds... 
00:37:51.263 [2024-07-12 11:43:37.377535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:37:51.263 [2024-07-12 11:43:37.378359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.378406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.387827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:51.263 [2024-07-12 11:43:37.388560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.388592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.399087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:37:51.263 [2024-07-12 11:43:37.399999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.400026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.410985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:37:51.263 [2024-07-12 11:43:37.412051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.412077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.421633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:37:51.263 [2024-07-12 11:43:37.422711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.422738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.432211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:37:51.263 [2024-07-12 11:43:37.433266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.433292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.442801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:37:51.263 [2024-07-12 11:43:37.443853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.443879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.453417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:37:51.263 [2024-07-12 11:43:37.454461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.454487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.463954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f92c0 00:37:51.263 [2024-07-12 11:43:37.465013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.465039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.474554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa3a0 00:37:51.263 [2024-07-12 11:43:37.475603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.475629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.485133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:51.263 [2024-07-12 11:43:37.486187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.486213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.495683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:37:51.263 [2024-07-12 11:43:37.496736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:51.263 [2024-07-12 11:43:37.496762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.506145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:37:51.263 [2024-07-12 11:43:37.507193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.507219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.516727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:37:51.263 [2024-07-12 11:43:37.517801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.517827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.527284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:37:51.263 [2024-07-12 11:43:37.528335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.528361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.537886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:37:51.263 [2024-07-12 11:43:37.538938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.538964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.548418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:37:51.263 [2024-07-12 11:43:37.549459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.558963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e73e0 00:37:51.263 [2024-07-12 11:43:37.560025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.560051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.569524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:51.263 [2024-07-12 11:43:37.570571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.570597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.580030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:37:51.263 [2024-07-12 
11:43:37.581085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.581111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.590625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6020 00:37:51.263 [2024-07-12 11:43:37.591673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.591698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.601191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:37:51.263 [2024-07-12 11:43:37.602244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.602269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.263 [2024-07-12 11:43:37.611775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:37:51.263 [2024-07-12 11:43:37.612855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.263 [2024-07-12 11:43:37.612881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.622516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005480) with pdu=0x2000195f7538 00:37:51.523 [2024-07-12 11:43:37.623501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.623527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.633384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:37:51.523 [2024-07-12 11:43:37.634578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.634605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.643717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:37:51.523 [2024-07-12 11:43:37.644882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.644907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.654823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:37:51.523 [2024-07-12 11:43:37.656122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.656148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.665868] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:37:51.523 [2024-07-12 11:43:37.667303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.667332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.675739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:37:51.523 [2024-07-12 11:43:37.676764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.676790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.686127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:37:51.523 [2024-07-12 11:43:37.687161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.687186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.696668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:37:51.523 [2024-07-12 11:43:37.697705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.697730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.707226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:37:51.523 [2024-07-12 11:43:37.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.708313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.717777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:37:51.523 [2024-07-12 11:43:37.718812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.718837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.728299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e73e0 00:37:51.523 [2024-07-12 11:43:37.729384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.729408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.738906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:37:51.523 [2024-07-12 11:43:37.739943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.739969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.749357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:37:51.523 [2024-07-12 11:43:37.750394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.750419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.759931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:37:51.523 [2024-07-12 11:43:37.760995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.761021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.770543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:37:51.523 [2024-07-12 11:43:37.771576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.771600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.781085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:37:51.523 [2024-07-12 11:43:37.782118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 
11:43:37.782143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.791661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:37:51.523 [2024-07-12 11:43:37.792700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.792725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.802233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:37:51.523 [2024-07-12 11:43:37.803266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.803292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.812659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:37:51.523 [2024-07-12 11:43:37.813711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.813735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.823251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:37:51.523 [2024-07-12 11:43:37.824288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1015 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.824314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.833798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:37:51.523 [2024-07-12 11:43:37.834834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.834859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.844344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:51.523 [2024-07-12 11:43:37.845382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.845411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.523 [2024-07-12 11:43:37.854937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:37:51.523 [2024-07-12 11:43:37.855971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.523 [2024-07-12 11:43:37.855996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.524 [2024-07-12 11:43:37.865463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:37:51.524 [2024-07-12 11:43:37.866513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.524 [2024-07-12 11:43:37.866539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.524 [2024-07-12 11:43:37.876017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:37:51.524 [2024-07-12 11:43:37.877068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.524 [2024-07-12 11:43:37.877094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.886704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:51.784 [2024-07-12 11:43:37.887704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.887732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.897515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:37:51.784 [2024-07-12 11:43:37.898549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.898575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.908085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195e84c0 00:37:51.784 [2024-07-12 11:43:37.909126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.909152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.918627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:37:51.784 [2024-07-12 11:43:37.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.919684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.929163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:37:51.784 [2024-07-12 11:43:37.930199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.930224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.939738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:37:51.784 [2024-07-12 11:43:37.940784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.940809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.950214] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:37:51.784 [2024-07-12 11:43:37.951248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.960684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec840 00:37:51.784 [2024-07-12 11:43:37.961694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.961720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.971226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:37:51.784 [2024-07-12 11:43:37.972260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.972286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.981746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:37:51.784 [2024-07-12 11:43:37.982792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.982816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:37.992190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:37:51.784 [2024-07-12 11:43:37.993272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:37.993298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.002931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e49b0 00:37:51.784 [2024-07-12 11:43:38.003889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.003914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.013507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0630 00:37:51.784 [2024-07-12 11:43:38.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.024475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9b30 00:37:51.784 [2024-07-12 11:43:38.025576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.025601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.035197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5220 00:37:51.784 [2024-07-12 11:43:38.036289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.036315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.046450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:37:51.784 [2024-07-12 11:43:38.047825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.047851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.056210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:37:51.784 [2024-07-12 11:43:38.057402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.057429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.066353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8d30 00:37:51.784 [2024-07-12 11:43:38.067193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 
11:43:38.067218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.076853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:37:51.784 [2024-07-12 11:43:38.077677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.077703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.087428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:37:51.784 [2024-07-12 11:43:38.088241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.088268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.097226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:37:51.784 [2024-07-12 11:43:38.098105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.098131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:51.784 [2024-07-12 11:43:38.108326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:37:51.784 [2024-07-12 11:43:38.109375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5465 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.784 [2024-07-12 11:43:38.109407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:51.785 [2024-07-12 11:43:38.119402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:37:51.785 [2024-07-12 11:43:38.120581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.785 [2024-07-12 11:43:38.120611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:51.785 [2024-07-12 11:43:38.130471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:51.785 [2024-07-12 11:43:38.131784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.785 [2024-07-12 11:43:38.131810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.141584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:37:52.045 [2024-07-12 11:43:38.143075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.143102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.152947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:37:52.045 [2024-07-12 11:43:38.154610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.154636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.160735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:37:52.045 [2024-07-12 11:43:38.161497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.161522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.171847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:52.045 [2024-07-12 11:43:38.172729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.172755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.182893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:52.045 [2024-07-12 11:43:38.183933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.183958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.193979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195ebb98 00:37:52.045 [2024-07-12 11:43:38.195159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.195185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.204973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:37:52.045 [2024-07-12 11:43:38.206304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.206331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.216057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:52.045 [2024-07-12 11:43:38.217529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.217554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.227177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:37:52.045 [2024-07-12 11:43:38.228786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.228813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.238283] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:37:52.045 [2024-07-12 11:43:38.240035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.240061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.245770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:37:52.045 [2024-07-12 11:43:38.246544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.246571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.256523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:37:52.045 [2024-07-12 11:43:38.257226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.257254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.267149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:37:52.045 [2024-07-12 11:43:38.267842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.267868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.279103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:37:52.045 [2024-07-12 11:43:38.280334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.280360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.290213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:37:52.045 [2024-07-12 11:43:38.291699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.291724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.301297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:37:52.045 [2024-07-12 11:43:38.302881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.302910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.312418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:52.045 [2024-07-12 11:43:38.314119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.314145] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.319869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:37:52.045 [2024-07-12 11:43:38.320637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.045 [2024-07-12 11:43:38.320663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:52.045 [2024-07-12 11:43:38.330623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:37:52.045 [2024-07-12 11:43:38.331304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.331330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.341579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1430 00:37:52.046 [2024-07-12 11:43:38.342486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.342512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.352289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:37:52.046 [2024-07-12 11:43:38.353201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 
11:43:38.353226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.362862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:37:52.046 [2024-07-12 11:43:38.363785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.363810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.373800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:37:52.046 [2024-07-12 11:43:38.374848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.374875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.384500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:37:52.046 [2024-07-12 11:43:38.385551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.385576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:52.046 [2024-07-12 11:43:38.395108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7970 00:37:52.046 [2024-07-12 11:43:38.396238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25095 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.046 [2024-07-12 11:43:38.396265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.405288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:37:52.306 [2024-07-12 11:43:38.406293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.406319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.416480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:37:52.306 [2024-07-12 11:43:38.417659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.417686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.427591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:37:52.306 [2024-07-12 11:43:38.428900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.428927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.438658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebfd0 00:37:52.306 [2024-07-12 11:43:38.440131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.440158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.449808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:37:52.306 [2024-07-12 11:43:38.451428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.451454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.460729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:37:52.306 [2024-07-12 11:43:38.462355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.462389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.469861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:37:52.306 [2024-07-12 11:43:38.470580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.470606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.480904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195f3e60 00:37:52.306 [2024-07-12 11:43:38.481717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.481743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.490906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:37:52.306 [2024-07-12 11:43:38.492335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.492361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.499997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:37:52.306 [2024-07-12 11:43:38.500750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.500775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.511136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:37:52.306 [2024-07-12 11:43:38.512005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.512032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.522217] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:37:52.306 [2024-07-12 11:43:38.523247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.523274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.533299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3060 00:37:52.306 [2024-07-12 11:43:38.534447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.534473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.544436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:37:52.306 [2024-07-12 11:43:38.545651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.545678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.555148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:37:52.306 [2024-07-12 11:43:38.556457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.556483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.566074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:37:52.306 [2024-07-12 11:43:38.567385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.567413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.575981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:37:52.306 [2024-07-12 11:43:38.577142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.577172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.586146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:37:52.306 [2024-07-12 11:43:38.587242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.587270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.597420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:52.306 [2024-07-12 11:43:38.598643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.598669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.607317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:37:52.306 [2024-07-12 11:43:38.608205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.608231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.619702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e84c0 00:37:52.306 [2024-07-12 11:43:38.621282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.621309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.630971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7100 00:37:52.306 [2024-07-12 11:43:38.632610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.632636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.638450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:37:52.306 [2024-07-12 11:43:38.639111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 
11:43:38.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.649165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:37:52.306 [2024-07-12 11:43:38.649856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.649882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:52.306 [2024-07-12 11:43:38.661451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:37:52.306 [2024-07-12 11:43:38.662708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.306 [2024-07-12 11:43:38.662735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.672753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:37:52.566 [2024-07-12 11:43:38.674187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.674214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.683911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:37:52.566 [2024-07-12 11:43:38.685483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3772 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.685509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.694999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:37:52.566 [2024-07-12 11:43:38.696721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.696747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.702503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:52.566 [2024-07-12 11:43:38.703244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.703271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.712574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:37:52.566 [2024-07-12 11:43:38.713224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.713250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.725181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4140 00:37:52.566 [2024-07-12 11:43:38.726389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.726416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.735050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:52.566 [2024-07-12 11:43:38.735838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.735865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.745808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:52.566 [2024-07-12 11:43:38.746457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.746483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.756879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:37:52.566 [2024-07-12 11:43:38.757661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.566 [2024-07-12 11:43:38.757691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:52.566 [2024-07-12 11:43:38.767795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195f81e0 00:37:52.566 [2024-07-12 11:43:38.768863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.768889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.779587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:37:52.567 [2024-07-12 11:43:38.781208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.781235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.787044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0 00:37:52.567 [2024-07-12 11:43:38.787696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.787723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.799831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:37:52.567 [2024-07-12 11:43:38.801036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.801063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.810766] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:37:52.567 [2024-07-12 11:43:38.812189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.812216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.819440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:37:52.567 [2024-07-12 11:43:38.820198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.820224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.830376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:37:52.567 [2024-07-12 11:43:38.831283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.831310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.841056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:37:52.567 [2024-07-12 11:43:38.841962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.841988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.850902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:37:52.567 [2024-07-12 11:43:38.851800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.851827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.862053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:52.567 [2024-07-12 11:43:38.863163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.863189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.873127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:37:52.567 [2024-07-12 11:43:38.874391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.874418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.884236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:37:52.567 [2024-07-12 11:43:38.885642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.885668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.895464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:37:52.567 [2024-07-12 11:43:38.897000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.897027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.906550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:37:52.567 [2024-07-12 11:43:38.908278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.908304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:52.567 [2024-07-12 11:43:38.916710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1710 00:37:52.567 [2024-07-12 11:43:38.917881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.567 [2024-07-12 11:43:38.917907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.926681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:37:52.827 [2024-07-12 11:43:38.927878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 
11:43:38.927905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.937947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:37:52.827 [2024-07-12 11:43:38.939315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.939342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.949050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:37:52.827 [2024-07-12 11:43:38.950566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.950592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.960104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:37:52.827 [2024-07-12 11:43:38.961777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.961803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.971212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:37:52.827 [2024-07-12 11:43:38.973014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13242 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.973039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.978699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:37:52.827 [2024-07-12 11:43:38.979529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.979555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:38.989438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:37:52.827 [2024-07-12 11:43:38.990269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:38.990295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.000025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:37:52.827 [2024-07-12 11:43:39.000775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.000801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.010768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:37:52.827 [2024-07-12 11:43:39.011616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.011642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.021348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:52.827 [2024-07-12 11:43:39.022201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.022226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.032341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:37:52.827 [2024-07-12 11:43:39.033337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.033363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.042368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:52.827 [2024-07-12 11:43:39.043360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.043392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.053847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195e95a0 00:37:52.827 [2024-07-12 11:43:39.054960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.054987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.064975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:37:52.827 [2024-07-12 11:43:39.066223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.066249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.076216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb480 00:37:52.827 [2024-07-12 11:43:39.077612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.077639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.087325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:37:52.827 [2024-07-12 11:43:39.088855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.088881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.098424] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:37:52.827 [2024-07-12 11:43:39.100080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.100106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.109523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:37:52.827 [2024-07-12 11:43:39.111325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.111350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.117013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:37:52.827 [2024-07-12 11:43:39.117824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.117850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.128093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fcdd0 00:37:52.827 [2024-07-12 11:43:39.129102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.129128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:52.827 
[2024-07-12 11:43:39.138118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:37:52.827 [2024-07-12 11:43:39.139117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.139143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.149223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:37:52.827 [2024-07-12 11:43:39.150325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.150351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.160292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee190 00:37:52.827 [2024-07-12 11:43:39.161577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.827 [2024-07-12 11:43:39.161604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:52.827 [2024-07-12 11:43:39.171637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:37:52.827 [2024-07-12 11:43:39.173018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:52.828 [2024-07-12 11:43:39.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:52.828 [2024-07-12 11:43:39.182880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:37:53.087 [2024-07-12 11:43:39.184344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.184372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.194031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:37:53.087 [2024-07-12 11:43:39.195705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.195731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.205182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ddc00 00:37:53.087 [2024-07-12 11:43:39.207001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.207027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.213630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:37:53.087 [2024-07-12 11:43:39.214844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.214873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.224793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:37:53.087 [2024-07-12 11:43:39.226176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.226202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.234691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:37:53.087 [2024-07-12 11:43:39.235558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.235585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.245414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:37:53.087 [2024-07-12 11:43:39.246136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.246162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.256494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:37:53.087 [2024-07-12 11:43:39.257354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:53.087 [2024-07-12 11:43:39.257386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.267402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0350 00:37:53.087 [2024-07-12 11:43:39.268565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.268593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.277139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:37:53.087 [2024-07-12 11:43:39.278282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.278309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.288265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:37:53.087 [2024-07-12 11:43:39.289641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.289668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.299351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:37:53.087 [2024-07-12 11:43:39.300855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.300881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.310503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:37:53.087 [2024-07-12 11:43:39.312139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.312164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.321626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:37:53.087 [2024-07-12 11:43:39.323413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.323439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.329093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:37:53.087 [2024-07-12 11:43:39.329910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.329936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.339860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:37:53.087 [2024-07-12 
11:43:39.340600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.087 [2024-07-12 11:43:39.340626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:53.087 [2024-07-12 11:43:39.350833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:37:53.088 [2024-07-12 11:43:39.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.088 [2024-07-12 11:43:39.351837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:53.088 [2024-07-12 11:43:39.362598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f81e0 00:37:53.088 [2024-07-12 11:43:39.363950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.088 [2024-07-12 11:43:39.363976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:53.088 [2024-07-12 11:43:39.373514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:37:53.088 [2024-07-12 11:43:39.374869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:53.088 [2024-07-12 11:43:39.374895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:53.088 00:37:53.088 Latency(us) 00:37:53.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max
00:37:53.088 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:53.088 nvme0n1 : 2.01 24071.84 94.03 0.00 0.00 5309.94 2478.97 14474.91
00:37:53.088 ===================================================================================================================
00:37:53.088 Total : 24071.84 94.03 0.00 0.00 5309.94 2478.97 14474.91
00:37:53.088 0
00:37:53.088 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:53.088 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:53.088 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:53.088 | .driver_specific
00:37:53.088 | .nvme_error
00:37:53.088 | .status_code
00:37:53.088 | .command_transient_transport_error'
00:37:53.088 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:53.348 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 189 > 0 ))
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1169315
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1169315 ']'
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1169315
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169315
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
process_name=reactor_1
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169315'
00:37:53.349 killing process with pid 1169315
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1169315
00:37:53.349 Received shutdown signal, test time was about 2.000000 seconds
00:37:53.349
00:37:53.349 Latency(us)
00:37:53.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:53.349 ===================================================================================================================
00:37:53.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:53.349 11:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1169315
00:37:54.328 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:54.328 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:54.328 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:54.328 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:54.328 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1170071
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1170071 /var/tmp/bperf.sock
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@829 -- # '[' -z 1170071 ']'
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:54.591 11:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:54.591 [2024-07-12 11:43:40.754554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:54.591 [2024-07-12 11:43:40.754678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170071 ]
00:37:54.591 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:54.591 Zero copy mechanism will not be used. 
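The trace above starts bdevperf in the background with `-r /var/tmp/bperf.sock` and then blocks in `waitforlisten` until the RPC socket comes up. A minimal sketch of that wait loop, under stated assumptions: the helper name `wait_for_sock` and its retry policy are ours, and the real `waitforlisten` additionally verifies the PID and probes the socket via rpc.py rather than only checking that the path exists.

```shell
# Sketch of the waitforlisten pattern: poll until a UNIX-domain socket
# appears on disk, then proceed. Illustrative only, not SPDK's helper.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        if [ -S "$sock" ]; then
            return 0        # socket path exists: server is accepting RPCs
        fi
        sleep 0.1
    done
    return 1                # timed out waiting for the server
}

# Usage mirroring the log (bdevperf itself needs an SPDK build tree):
#   ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
#       -w randwrite -o 131072 -t 2 -q 16 -z &
#   wait_for_sock /var/tmp/bperf.sock
```

The `-z` flag keeps bdevperf idle after startup so the test can attach a controller and trigger `perform_tests` over the same socket later.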
00:37:54.591 EAL: No free 2048 kB hugepages reported on node 1 00:37:54.591 [2024-07-12 11:43:40.856132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.849 [2024-07-12 11:43:41.078594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:55.414 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:55.673 nvme0n1 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:55.673 11:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:55.933 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:55.933 Zero copy mechanism will not be used. 00:37:55.933 Running I/O for 2 seconds... 00:37:55.933 [2024-07-12 11:43:42.077542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.077990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.078028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.084291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.084735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.084770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.090486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.090907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.090940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.096723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.097128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.097155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.102883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.103306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.103333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.108716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.109136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.109163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 
11:43:42.115429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.115859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.115886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.122560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.122964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.122991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.129630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.130054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.130080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.136008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.136434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.141723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.142127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.142154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.147979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.148405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.148432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.153791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.154213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.154240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.160291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.160742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.160769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.167369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.167778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.167805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.173307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.173735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.173761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.179261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.179679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.179706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.185304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.933 [2024-07-12 11:43:42.185722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:55.933 [2024-07-12 11:43:42.185749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.933 [2024-07-12 11:43:42.192192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.192619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.192645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.199110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.199528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.199558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.205752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.206163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.206190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.212564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.212989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.213016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.219439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.219872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.219899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.227075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.227485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.227512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.233493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.233914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.233941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.239327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.239751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.239777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.245514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.245922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.245950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.251302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.251727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.251754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.257223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.257654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.257681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.263290] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.263709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.263736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.270389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.270820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.270846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.276968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.277398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.277425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:55.934 [2024-07-12 11:43:42.284243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:55.934 [2024-07-12 11:43:42.284659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:55.934 [2024-07-12 11:43:42.284685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.291676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.292105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.292132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.299432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.299852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.299878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.305257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.305681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.305707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.310909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.311329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.311360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.316632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.317040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.317067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.322795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.323215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.323242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.329917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.330358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.330388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.336505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.336927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.336955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.342789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.343215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.343241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.348741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.349145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.349171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.355553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.355984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.356010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.362256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.362674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.368417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.368850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.368877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.374331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.374792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.374818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.380679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.381105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.381132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.386912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 
[2024-07-12 11:43:42.387303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.387330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.195 [2024-07-12 11:43:42.393730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.195 [2024-07-12 11:43:42.394138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.195 [2024-07-12 11:43:42.394165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.400270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.400718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.400744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.406531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.406947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.406973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.412568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.412976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.413002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.418622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.419048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.419075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.425403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.425822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.425848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.432794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.433213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.433239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 
[2024-07-12 11:43:42.439296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.439722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.439749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.445243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.445664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.445689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.451968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.452387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.452415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.458024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.458448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.458474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.464800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.465219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.465245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.472166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.472582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.472608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.479178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.479608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.479634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.485324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.485751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.485777] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.491241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.491686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.491712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.496832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.497248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.497274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.502444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.502879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.502906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.508082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.508504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.508530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.513737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.514143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.514169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.519300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.519734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.519759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.524843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.525258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.525285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.530335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.530752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.530778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.535931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.536338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.536364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.541505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.541950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.541977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.196 [2024-07-12 11:43:42.547267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.196 [2024-07-12 11:43:42.547698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.196 [2024-07-12 11:43:42.547725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.552883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.553305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.553333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.558559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.558992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.559018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.564210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.564624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.569858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.570286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.570312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.575484] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.575906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.580940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.581364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.581396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.586467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.586898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.586924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.592009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.592427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.592454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.597659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.598114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.456 [2024-07-12 11:43:42.598141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.456 [2024-07-12 11:43:42.603592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.456 [2024-07-12 11:43:42.604011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.604038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.609302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.609728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.609755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.614931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.615349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.615375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.620486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.620896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.620922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.626105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.626528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.626554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.631628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.632050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.632075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.637275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.637694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.637719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.642840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.643261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.643287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.648417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.648852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.648878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.654120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.654535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.654562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.660809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.661231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.661258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.667550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.667965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.667992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.674276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.674718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.674749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.681194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.681608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.681636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.688619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.689042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.689069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.696113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.696565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.696593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.703375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.703825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.703852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.710561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.710983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.711010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.717545] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.717965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.717991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.724523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.724958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.724985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.731748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.732165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.732191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.739743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.740158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.740184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.747247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.747696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.747724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.754475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.754886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.457 [2024-07-12 11:43:42.754912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.457 [2024-07-12 11:43:42.761496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.457 [2024-07-12 11:43:42.761921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.761947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.768463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.768904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.768934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.775445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.775853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.775879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.781479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.781904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.781929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.787587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.788008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.788034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.794045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.794477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.794508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.800732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.801189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.458 [2024-07-12 11:43:42.808348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.458 [2024-07-12 11:43:42.808821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.458 [2024-07-12 11:43:42.808848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.815812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.816209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.816235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.822732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.823156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.823183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.829852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.830262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.830288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.836144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.836590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.842924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.843321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.843348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.849856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.850265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.850292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.857322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.857750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.857776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.864181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.864281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.864307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.871649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.872069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.872097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.878442] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.878861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.878889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.884454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.884855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.884882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.890772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.891192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.891218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.896728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.897134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.897161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.902448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.902903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.902929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.908253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.908680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.908710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.913889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.914337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.914364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.919587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.920001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.920027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.925267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.925693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.925720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.930940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.931359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.931393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.936619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.937014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.716 [2024-07-12 11:43:42.937040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.716 [2024-07-12 11:43:42.942191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.716 [2024-07-12 11:43:42.942609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.942636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.947717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.948129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.948156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.953213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.953644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.953670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.958807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.959251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.959278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.964459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.964881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.964907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.970114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.970541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.970567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.975762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.976176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.976201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.981570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.981978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.982004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.988027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.988446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.988475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:42.994751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:42.995165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:42.995192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.001717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.002133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.002159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.008516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.008906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.008937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.015328] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.015714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.015740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.022412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.022866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.029214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.029639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.029666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.036356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.036782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.036809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.043481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.043883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.043910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.050967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.051396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.051423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.057814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.058233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.064767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.065174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.065200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.717 [2024-07-12 11:43:43.071615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.717 [2024-07-12 11:43:43.072048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.717 [2024-07-12 11:43:43.072075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.078621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.079056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.079082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.085776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.086176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.086203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.091813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.092247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.092275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.097640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.098069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.098096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.103646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.104066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.104093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.109790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.110227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.110254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.115576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.976 [2024-07-12 11:43:43.116000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.976 [2024-07-12 11:43:43.116026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.976 [2024-07-12 11:43:43.122360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.122799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.122826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.129098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.129537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.129576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.135140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.135567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.135593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.140839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.141257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.141283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.146952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.147373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.147406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.153675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.154081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.154107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.159948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.160372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.160404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.165984] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.166413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.166439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.171990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.172416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.172442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.178954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.179372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.179407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.186314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.186418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.186443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.194092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.194277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.194301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.201874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.202325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.202352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.208255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.208684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.208710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.214395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.214806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.214831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.220364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.220823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.220849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.226143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.226566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.226592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.232549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.232969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.232995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.239247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.239671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.239697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.245165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.245581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.245608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.251090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.251520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.251546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.257077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.257503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.257529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.263056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.263476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.263501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.268586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.268682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.268707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.274962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.275399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.275424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.281570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.281952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.281978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.287730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:56.977 [2024-07-12 11:43:43.288126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.977 [2024-07-12 11:43:43.288156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.977 [2024-07-12 11:43:43.294133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.294581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.294608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.978 [2024-07-12 11:43:43.301854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.302350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.302382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:56.978 [2024-07-12 11:43:43.309535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.309919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.309946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:56.978 [2024-07-12 11:43:43.315341] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.315731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.315757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:56.978 [2024-07-12 11:43:43.321528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.321902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.321928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:56.978 [2024-07-12 11:43:43.327712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:56.978 [2024-07-12 11:43:43.328097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:56.978 [2024-07-12 11:43:43.328123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.333846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.334250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.334278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.339937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.340349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.340383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.346469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.346861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.346888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.353453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.353884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.353910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.361209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.361627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.368824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.369270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.369297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.376587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.377018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.377043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.384460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.384843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.384869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.392184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.392607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.392634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.400185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.400608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.407530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.407917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.407950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.415478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.415959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.415985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.423732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.424127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.424153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.431485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.431869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.431895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.439588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.440067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.440093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.446198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.446560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.446586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.451930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.452329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.457596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.238 [2024-07-12 11:43:43.457966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.238 [2024-07-12 11:43:43.457992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.238 [2024-07-12 11:43:43.462964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.463317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.463343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.469077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.469468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.469494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.475358] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.475736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.475762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.481041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.481435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.486448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.486804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.486830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.492279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.492699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.492725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.499081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.499481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.499507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.505420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.505790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.505816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.512000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.512488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.512515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.518149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.518526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.518555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.523795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.524162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.524188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.530833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.537290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.537655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.537681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.544157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.544578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.544604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.550967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.551339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.551365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.558362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.558802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.558829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.565432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.565803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.565831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.572993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.573414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.573441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.579508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.579908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.584931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.585298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.585324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.239 [2024-07-12 11:43:43.590589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.239 [2024-07-12 11:43:43.590963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.239 [2024-07-12 11:43:43.590989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.499 [2024-07-12 11:43:43.596057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:57.499 [2024-07-12 11:43:43.596412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.499 [2024-07-12 11:43:43.596440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.499 [2024-07-12 11:43:43.601910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.499 [2024-07-12 11:43:43.602295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.499 [2024-07-12 11:43:43.602322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.499 [2024-07-12 11:43:43.608292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.499 [2024-07-12 11:43:43.608705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.499 [2024-07-12 11:43:43.608731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.499 [2024-07-12 11:43:43.614717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.499 [2024-07-12 11:43:43.615103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.499 [2024-07-12 11:43:43.615129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.499 [2024-07-12 11:43:43.622623] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.499 [2024-07-12 11:43:43.622987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.623013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.629601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.630092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.630123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.636880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.637323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.637348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.645145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.645506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.645533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.651768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.652142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.652169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.657130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.657501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.657527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.662390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.662746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.662771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.667638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.668008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.668032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.672887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.673244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.673269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.678098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.678465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.678492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.683426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.683788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.683814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.688750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.689128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.689155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.694577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.694962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.694989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.700419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.700784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.700812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.706018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.706374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.706407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.711750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.712124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.712151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.717995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.718369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.718404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.724352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.724720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.724746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.730104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.730478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.730508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.735987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.736354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.736386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.741979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.742406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.742432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.749115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.749479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.749505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.755026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.755363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.755395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.760584] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.760931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.760958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.766236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.766589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.766616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.771657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.772002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.772028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.777415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.777763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.777789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.783104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.783460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.783487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.788586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.788926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.500 [2024-07-12 11:43:43.788951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.500 [2024-07-12 11:43:43.794054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.500 [2024-07-12 11:43:43.794388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.794414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.799604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.799952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.799978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.805127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.805478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.805504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.810849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.811201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.811227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.816205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.816559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.816585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.821371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.821706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.821731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.827044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.827362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.827393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.832526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.832840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.832866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.837585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.837890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.837916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.842759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.843054] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.843080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.847816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.848115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.848142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.501 [2024-07-12 11:43:43.853039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.501 [2024-07-12 11:43:43.853344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.501 [2024-07-12 11:43:43.853370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.859019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.859322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.859348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.863954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 
[2024-07-12 11:43:43.864242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.864267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.868559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.868867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.868893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.873113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.873431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.873457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.877753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.878053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.878079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.882374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.882703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.882728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.887354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.887669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.887695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.891951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.892250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.892276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.896587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.896871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.896897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.761 
[2024-07-12 11:43:43.901809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.902108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.902134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.906735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.907039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.907065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.911531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.911823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.911849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.916918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.917220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.917246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.922527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.922827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.922853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.761 [2024-07-12 11:43:43.927763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.761 [2024-07-12 11:43:43.928064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.761 [2024-07-12 11:43:43.928089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.932408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.932701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.932728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.936905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.937208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.937233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.941446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.941739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.941765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.946001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.946310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.946336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.950593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.950896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.950922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.955114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.955406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.959607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.959907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.959933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.964085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.964389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.964414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.968832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.969127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.969153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.974217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.974528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.978933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.979231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.979256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.983503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.983785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.983811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.988079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.988387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.988413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.992640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.992939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.992964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:43.997142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:43.997437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:43.997462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.001790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.002089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.002115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.006365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.006668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.006693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.010953] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.011255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.011281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.015511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.015814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.015840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.020092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.020397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.020422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.024731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.025027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.025053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.029295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.029589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.029615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.033917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.034218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.034248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.038502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.038796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.043291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.043596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.043622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.048018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.048346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.052501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.052802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.052826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.057034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.762 [2024-07-12 11:43:44.057332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.762 [2024-07-12 11:43:44.057358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:57.762 [2024-07-12 11:43:44.061525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.763 [2024-07-12 11:43:44.061821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:57.763 [2024-07-12 11:43:44.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:57.763 [2024-07-12 11:43:44.066011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:37:57.763 [2024-07-12 11:43:44.066305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:57.763 [2024-07-12 11:43:44.066331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:57.763 00:37:57.763 Latency(us) 00:37:57.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:57.763 nvme0n1 : 2.00 5056.25 632.03 0.00 0.00 3159.40 1759.50 13278.16 00:37:57.763 =================================================================================================================== 00:37:57.763 Total : 5056.25 632.03 0.00 0.00 3159.40 1759.50 13278.16 00:37:57.763 0 00:37:57.763 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:57.763 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:57.763 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:57.763 | .driver_specific 00:37:57.763 | .nvme_error 00:37:57.763 | .status_code 00:37:57.763 | .command_transient_transport_error' 00:37:57.763 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- 
# (( 326 > 0 )) 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1170071 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1170071 ']' 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1170071 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170071 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170071' 00:37:58.021 killing process with pid 1170071 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1170071 00:37:58.021 Received shutdown signal, test time was about 2.000000 seconds 00:37:58.021 00:37:58.021 Latency(us) 00:37:58.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.021 =================================================================================================================== 00:37:58.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:58.021 11:43:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1170071 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1167490 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1167490 ']' 00:37:59.400 11:43:45 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1167490 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1167490 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1167490' 00:37:59.400 killing process with pid 1167490 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1167490 00:37:59.400 11:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1167490 00:38:00.337 00:38:00.337 real 0m21.791s 00:38:00.337 user 0m40.489s 00:38:00.337 sys 0m4.726s 00:38:00.337 11:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:00.337 11:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:00.337 ************************************ 00:38:00.337 END TEST nvmf_digest_error 00:38:00.337 ************************************ 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 
00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:00.596 rmmod nvme_tcp 00:38:00.596 rmmod nvme_fabrics 00:38:00.596 rmmod nvme_keyring 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1167490 ']' 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1167490 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1167490 ']' 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1167490 00:38:00.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1167490) - No such process 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1167490 is not found' 00:38:00.596 Process with pid 1167490 is not found 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.596 11:43:46 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:00.596 11:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.502 11:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:02.502 00:38:02.502 real 0m52.451s 00:38:02.502 user 1m24.580s 00:38:02.502 sys 0m13.810s 00:38:02.502 11:43:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:02.502 11:43:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:02.502 ************************************ 00:38:02.502 END TEST nvmf_digest 00:38:02.502 ************************************ 00:38:02.760 11:43:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:02.760 11:43:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:38:02.760 11:43:48 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:38:02.760 11:43:48 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:38:02.761 11:43:48 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:02.761 11:43:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:02.761 11:43:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:02.761 11:43:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:02.761 ************************************ 00:38:02.761 START TEST nvmf_bdevperf 00:38:02.761 ************************************ 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:02.761 * Looking for test storage... 
00:38:02.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.761 11:43:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.761 11:43:48 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:02.761 11:43:49 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:38:02.761 11:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:08.036 11:43:53 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.036 
11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:08.036 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:08.036 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.036 11:43:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:08.037 Found net devices under 0000:86:00.0: cvl_0_0 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:08.037 Found net devices under 0000:86:00.1: cvl_0_1 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.037 11:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:08.037 11:43:54 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:08.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:38:08.037 00:38:08.037 --- 10.0.0.2 ping statistics --- 00:38:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.037 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:08.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:38:08.037 00:38:08.037 --- 10.0.0.1 ping statistics --- 00:38:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.037 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1174296 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1174296 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:08.037 11:43:54 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1174296 ']' 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:08.037 11:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.037 [2024-07-12 11:43:54.342906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:08.037 [2024-07-12 11:43:54.342994] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.296 EAL: No free 2048 kB hugepages reported on node 1 00:38:08.296 [2024-07-12 11:43:54.452833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:08.555 [2024-07-12 11:43:54.675801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.555 [2024-07-12 11:43:54.675846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.555 [2024-07-12 11:43:54.675860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.555 [2024-07-12 11:43:54.675868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:38:08.555 [2024-07-12 11:43:54.675878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.555 [2024-07-12 11:43:54.676014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.555 [2024-07-12 11:43:54.676098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.555 [2024-07-12 11:43:54.676109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:08.815 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:08.815 [2024-07-12 11:43:55.158752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.075 Malloc0 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.075 [2024-07-12 11:43:55.296322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:09.075 { 00:38:09.075 "params": { 00:38:09.075 "name": "Nvme$subsystem", 00:38:09.075 "trtype": "$TEST_TRANSPORT", 00:38:09.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.075 "adrfam": "ipv4", 00:38:09.075 "trsvcid": "$NVMF_PORT", 00:38:09.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.075 "hdgst": ${hdgst:-false}, 00:38:09.075 "ddgst": ${ddgst:-false} 00:38:09.075 }, 00:38:09.075 "method": "bdev_nvme_attach_controller" 00:38:09.075 } 00:38:09.075 EOF 00:38:09.075 )") 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:09.075 11:43:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:09.075 "params": { 00:38:09.075 "name": "Nvme1", 00:38:09.075 "trtype": "tcp", 00:38:09.075 "traddr": "10.0.0.2", 00:38:09.075 "adrfam": "ipv4", 00:38:09.075 "trsvcid": "4420", 00:38:09.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.075 "hdgst": false, 00:38:09.075 "ddgst": false 00:38:09.075 }, 00:38:09.075 "method": "bdev_nvme_attach_controller" 00:38:09.075 }' 00:38:09.075 [2024-07-12 11:43:55.374616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:38:09.075 [2024-07-12 11:43:55.374704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174542 ] 00:38:09.075 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.334 [2024-07-12 11:43:55.478023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.594 [2024-07-12 11:43:55.711887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.853 Running I/O for 1 seconds... 00:38:11.231 00:38:11.231 Latency(us) 00:38:11.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:11.231 Verification LBA range: start 0x0 length 0x4000 00:38:11.231 Nvme1n1 : 1.01 9488.19 37.06 0.00 0.00 13434.16 2678.43 14474.91 00:38:11.231 =================================================================================================================== 00:38:11.231 Total : 9488.19 37.06 0.00 0.00 13434.16 2678.43 14474.91 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1175009 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:38:12.167 { 00:38:12.167 "params": { 00:38:12.167 "name": "Nvme$subsystem", 00:38:12.167 "trtype": "$TEST_TRANSPORT", 00:38:12.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:12.167 "adrfam": "ipv4", 00:38:12.167 "trsvcid": "$NVMF_PORT", 00:38:12.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:12.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:12.167 "hdgst": ${hdgst:-false}, 00:38:12.167 "ddgst": ${ddgst:-false} 00:38:12.167 }, 00:38:12.167 "method": "bdev_nvme_attach_controller" 00:38:12.167 } 00:38:12.167 EOF 00:38:12.167 )") 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:38:12.167 11:43:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:12.167 "params": { 00:38:12.167 "name": "Nvme1", 00:38:12.167 "trtype": "tcp", 00:38:12.167 "traddr": "10.0.0.2", 00:38:12.167 "adrfam": "ipv4", 00:38:12.167 "trsvcid": "4420", 00:38:12.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:12.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:12.168 "hdgst": false, 00:38:12.168 "ddgst": false 00:38:12.168 }, 00:38:12.168 "method": "bdev_nvme_attach_controller" 00:38:12.168 }' 00:38:12.168 [2024-07-12 11:43:58.361479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:38:12.168 [2024-07-12 11:43:58.361570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175009 ] 00:38:12.168 EAL: No free 2048 kB hugepages reported on node 1 00:38:12.168 [2024-07-12 11:43:58.462043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.426 [2024-07-12 11:43:58.693672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.993 Running I/O for 15 seconds... 00:38:15.532 11:44:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1174296 00:38:15.532 11:44:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:15.532 [2024-07-12 11:44:01.324981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:15.532 [2024-07-12 11:44:01.325792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:15.532 [2024-07-12 11:44:01.325803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:15.532 [2024-07-12 11:44:01.325816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:15.533 [2024-07-12 11:44:01.325826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:15.535 [2024-07-12 11:44:01.327868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(5) to be set
00:38:15.535 [2024-07-12 11:44:01.327881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:15.535 [2024-07-12 11:44:01.327889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:15.535 [2024-07-12 11:44:01.327899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28320 len:8 PRP1 0x0 PRP2 0x0
00:38:15.535 [2024-07-12 11:44:01.327910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:15.535 [2024-07-12 11:44:01.328204] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500032da00 was disconnected and freed. reset controller.
00:38:15.535 [2024-07-12 11:44:01.331326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.535 [2024-07-12 11:44:01.331417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.535 [2024-07-12 11:44:01.332066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.535 [2024-07-12 11:44:01.332089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.535 [2024-07-12 11:44:01.332101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.535 [2024-07-12 11:44:01.332300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.535 [2024-07-12 11:44:01.332507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.535 [2024-07-12 11:44:01.332520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.535 [2024-07-12 11:44:01.332533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.535 [2024-07-12 11:44:01.335570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.535 [2024-07-12 11:44:01.345035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.535 [2024-07-12 11:44:01.345504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.535 [2024-07-12 11:44:01.345580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.535 [2024-07-12 11:44:01.345613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.535 [2024-07-12 11:44:01.346261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.535 [2024-07-12 11:44:01.346888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.535 [2024-07-12 11:44:01.346901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.535 [2024-07-12 11:44:01.346911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.535 [2024-07-12 11:44:01.349873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.535 [2024-07-12 11:44:01.358234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.535 [2024-07-12 11:44:01.358697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.535 [2024-07-12 11:44:01.358756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.535 [2024-07-12 11:44:01.358789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.535 [2024-07-12 11:44:01.359454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.535 [2024-07-12 11:44:01.359665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.535 [2024-07-12 11:44:01.359678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.535 [2024-07-12 11:44:01.359688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.535 [2024-07-12 11:44:01.362659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.535 [2024-07-12 11:44:01.371416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.535 [2024-07-12 11:44:01.371851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.535 [2024-07-12 11:44:01.371872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.535 [2024-07-12 11:44:01.371882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.535 [2024-07-12 11:44:01.372067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.535 [2024-07-12 11:44:01.372251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.535 [2024-07-12 11:44:01.372263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.535 [2024-07-12 11:44:01.372275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.535 [2024-07-12 11:44:01.375227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.535 [2024-07-12 11:44:01.384527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.535 [2024-07-12 11:44:01.384991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.535 [2024-07-12 11:44:01.385012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.535 [2024-07-12 11:44:01.385023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.535 [2024-07-12 11:44:01.385207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.535 [2024-07-12 11:44:01.385426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.535 [2024-07-12 11:44:01.385440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.535 [2024-07-12 11:44:01.385450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.535 [2024-07-12 11:44:01.388305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.535 [2024-07-12 11:44:01.397671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.535 [2024-07-12 11:44:01.398118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.535 [2024-07-12 11:44:01.398167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.535 [2024-07-12 11:44:01.398201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.535 [2024-07-12 11:44:01.398824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.535 [2024-07-12 11:44:01.399020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.535 [2024-07-12 11:44:01.399033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.535 [2024-07-12 11:44:01.399043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.535 [2024-07-12 11:44:01.401940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.535 [2024-07-12 11:44:01.410788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.535 [2024-07-12 11:44:01.411213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.535 [2024-07-12 11:44:01.411234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.411245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.411452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.411647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.411660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.411669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.414587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.423996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.424451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.424476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.424486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.424669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.424852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.424864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.424873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.427821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.437175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.437644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.437702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.437735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.438236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.438444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.438457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.438466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.441392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.450408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.450793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.450814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.450824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.451007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.451190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.451202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.451211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.454164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.463597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.463992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.464013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.464022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.464205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.464397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.464410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.464419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.467360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.476778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.477211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.477268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.536 [2024-07-12 11:44:01.477300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.536 [2024-07-12 11:44:01.477902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.536 [2024-07-12 11:44:01.478098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.536 [2024-07-12 11:44:01.478111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.536 [2024-07-12 11:44:01.478120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.536 [2024-07-12 11:44:01.481060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.536 [2024-07-12 11:44:01.489867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.536 [2024-07-12 11:44:01.490221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.536 [2024-07-12 11:44:01.490243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.490253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.490461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.490655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.490669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.490678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.493597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.502994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.503433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.503481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.503664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.503847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.503860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.503871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.506825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.516073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.516529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.516586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.516618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.517003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.517186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.517198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.517207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.520064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.529148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.529602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.529623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.529634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.529824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.530007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.530020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.530028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.532975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.542231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.542700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.542758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.542790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.543304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.543515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.543528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.543538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.546500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.555417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.555790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.555814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.555824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.556006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.556189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.556201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.556210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.559157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.568576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.569042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.569096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.569128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.569789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.570339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.570352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.570361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.573288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.581799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.582225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.582247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.582257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.582475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.582675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.582688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.582698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.585814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.595179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.595651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.537 [2024-07-12 11:44:01.595674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.537 [2024-07-12 11:44:01.595685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.537 [2024-07-12 11:44:01.595888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.537 [2024-07-12 11:44:01.596087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.537 [2024-07-12 11:44:01.596100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.537 [2024-07-12 11:44:01.596110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.537 [2024-07-12 11:44:01.599220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.537 [2024-07-12 11:44:01.608545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.537 [2024-07-12 11:44:01.608957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.538 [2024-07-12 11:44:01.608978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.538 [2024-07-12 11:44:01.608988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.538 [2024-07-12 11:44:01.609171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.538 [2024-07-12 11:44:01.609355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.538 [2024-07-12 11:44:01.609367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.538 [2024-07-12 11:44:01.609381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.538 [2024-07-12 11:44:01.612411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.538 [2024-07-12 11:44:01.621856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.538 [2024-07-12 11:44:01.622252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.538 [2024-07-12 11:44:01.622274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.538 [2024-07-12 11:44:01.622284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.538 [2024-07-12 11:44:01.622485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.538 [2024-07-12 11:44:01.622680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.538 [2024-07-12 11:44:01.622692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.538 [2024-07-12 11:44:01.622703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.538 [2024-07-12 11:44:01.625668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.538 [2024-07-12 11:44:01.634946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.538 [2024-07-12 11:44:01.635415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.538 [2024-07-12 11:44:01.635471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.538 [2024-07-12 11:44:01.635504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.538 [2024-07-12 11:44:01.636110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.538 [2024-07-12 11:44:01.636293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.538 [2024-07-12 11:44:01.636306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.538 [2024-07-12 11:44:01.636319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.538 [2024-07-12 11:44:01.639264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.538 [2024-07-12 11:44:01.648180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.538 [2024-07-12 11:44:01.648581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.538 [2024-07-12 11:44:01.648604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.538 [2024-07-12 11:44:01.648615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.538 [2024-07-12 11:44:01.648811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.538 [2024-07-12 11:44:01.648995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.538 [2024-07-12 11:44:01.649007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.538 [2024-07-12 11:44:01.649016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.538 [2024-07-12 11:44:01.651939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.538 [2024-07-12 11:44:01.661576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.538 [2024-07-12 11:44:01.661964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.538 [2024-07-12 11:44:01.661986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.538 [2024-07-12 11:44:01.661997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.538 [2024-07-12 11:44:01.662197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.538 [2024-07-12 11:44:01.662404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.538 [2024-07-12 11:44:01.662419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.538 [2024-07-12 11:44:01.662429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.538 [2024-07-12 11:44:01.665540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.538 [2024-07-12 11:44:01.675098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.538 [2024-07-12 11:44:01.675431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.538 [2024-07-12 11:44:01.675454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.538 [2024-07-12 11:44:01.675465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.538 [2024-07-12 11:44:01.675665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.538 [2024-07-12 11:44:01.675864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.538 [2024-07-12 11:44:01.675879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.538 [2024-07-12 11:44:01.675889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.538 [2024-07-12 11:44:01.679003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.538 [2024-07-12 11:44:01.688570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.538 [2024-07-12 11:44:01.688879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.538 [2024-07-12 11:44:01.688900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.538 [2024-07-12 11:44:01.688911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.538 [2024-07-12 11:44:01.689109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.538 [2024-07-12 11:44:01.689309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.538 [2024-07-12 11:44:01.689322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.538 [2024-07-12 11:44:01.689332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.538 [2024-07-12 11:44:01.692454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.538 [2024-07-12 11:44:01.701957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.538 [2024-07-12 11:44:01.702352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.538 [2024-07-12 11:44:01.702374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.538 [2024-07-12 11:44:01.702389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.538 [2024-07-12 11:44:01.702591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.538 [2024-07-12 11:44:01.702775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.538 [2024-07-12 11:44:01.702787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.538 [2024-07-12 11:44:01.702796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.538 [2024-07-12 11:44:01.705800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.538 [2024-07-12 11:44:01.715297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.538 [2024-07-12 11:44:01.715660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.538 [2024-07-12 11:44:01.715682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.538 [2024-07-12 11:44:01.715692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.538 [2024-07-12 11:44:01.715875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.538 [2024-07-12 11:44:01.716059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.538 [2024-07-12 11:44:01.716079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.538 [2024-07-12 11:44:01.716088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.538 [2024-07-12 11:44:01.718991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.538 [2024-07-12 11:44:01.728531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.538 [2024-07-12 11:44:01.728967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.728989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.728998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.729185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.729368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.729385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.729395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.732334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.741769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.742121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.742143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.742153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.742346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.742546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.742559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.742568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.745538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.754913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.755230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.755251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.755261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.755466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.755661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.755674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.755684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.758604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.768111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.768431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.768452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.768463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.768656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.768851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.768864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.768877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.771785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.781285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.781778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.781801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.781811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.782004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.782199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.782211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.782220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.785200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.794746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.795129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.795150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.795161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.795354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.795553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.795567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.795576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.798641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.808041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.808518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.808539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.808549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.808733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.808917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.808929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.808938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.811896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.821247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.821690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.539 [2024-07-12 11:44:01.821746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.539 [2024-07-12 11:44:01.821779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.539 [2024-07-12 11:44:01.822173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.539 [2024-07-12 11:44:01.822356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.539 [2024-07-12 11:44:01.822368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.539 [2024-07-12 11:44:01.822385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.539 [2024-07-12 11:44:01.825335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.539 [2024-07-12 11:44:01.834550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.539 [2024-07-12 11:44:01.834942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.540 [2024-07-12 11:44:01.834965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.540 [2024-07-12 11:44:01.834975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.540 [2024-07-12 11:44:01.835169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.540 [2024-07-12 11:44:01.835363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.540 [2024-07-12 11:44:01.835376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.540 [2024-07-12 11:44:01.835393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.540 [2024-07-12 11:44:01.838525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.540 [2024-07-12 11:44:01.848045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.540 [2024-07-12 11:44:01.848512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.540 [2024-07-12 11:44:01.848570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.540 [2024-07-12 11:44:01.848602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.540 [2024-07-12 11:44:01.849105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.540 [2024-07-12 11:44:01.849289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.540 [2024-07-12 11:44:01.849301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.540 [2024-07-12 11:44:01.849310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.540 [2024-07-12 11:44:01.852344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.540 [2024-07-12 11:44:01.861444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.540 [2024-07-12 11:44:01.861895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.540 [2024-07-12 11:44:01.861917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.540 [2024-07-12 11:44:01.861927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.540 [2024-07-12 11:44:01.862113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.540 [2024-07-12 11:44:01.862296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.540 [2024-07-12 11:44:01.862309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.540 [2024-07-12 11:44:01.862318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.540 [2024-07-12 11:44:01.865292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.540 [2024-07-12 11:44:01.874654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.540 [2024-07-12 11:44:01.875054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.540 [2024-07-12 11:44:01.875075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.540 [2024-07-12 11:44:01.875084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.540 [2024-07-12 11:44:01.875267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.540 [2024-07-12 11:44:01.875457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.540 [2024-07-12 11:44:01.875471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.540 [2024-07-12 11:44:01.875480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.540 [2024-07-12 11:44:01.878407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.888115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.888491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.888514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.888524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.888718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.888919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.888932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.888940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.891949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.901503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.901942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.901964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.901975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.902168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.902362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.902381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.902396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.905348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.914680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.915127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.915175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.915208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.915850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.916034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.916045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.916054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.918954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.927793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.928200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.928222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.928237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.928439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.928646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.928658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.928668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.931551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.941087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.941479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.941501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.941511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.941694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.941878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.941890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.941900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.944851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.954398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.954852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.954873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.954883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.955066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.955249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.955262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.955272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.958278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.967564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.967869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.967890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.967900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.968083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.968266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.968278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.968287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.971240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.980805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:15.802 [2024-07-12 11:44:01.981295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:15.802 [2024-07-12 11:44:01.981351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:15.802 [2024-07-12 11:44:01.981413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:15.802 [2024-07-12 11:44:01.982062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:15.802 [2024-07-12 11:44:01.982523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:15.802 [2024-07-12 11:44:01.982536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:15.802 [2024-07-12 11:44:01.982545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:15.802 [2024-07-12 11:44:01.985473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:15.802 [2024-07-12 11:44:01.994114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:01.994498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:01.994521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:01.994531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:01.994730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:01.994913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:01.994925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:01.994935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:01.997926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.007399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.007708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.007776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.007808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.008427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.008622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.008635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.008645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.011563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.020574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.020930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.020952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.020962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.021145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.021329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.021341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.021350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.024300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.033842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.034211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.034234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.034244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.034434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.034620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.034636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.034645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.037584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.047462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.047849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.047872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.047883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.048077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.048270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.048283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.048293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.051359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.060697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.061084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.061107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.061117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.061309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.061510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.061524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.061533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.064402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.074072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.074477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.074500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.074511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.074710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.074909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.074922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.074932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.078044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.087597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.088050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.088072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.088084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.088282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.088486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.088500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.088510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.091618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.803 [2024-07-12 11:44:02.100970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.803 [2024-07-12 11:44:02.101452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.803 [2024-07-12 11:44:02.101510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.803 [2024-07-12 11:44:02.101543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.803 [2024-07-12 11:44:02.102187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.803 [2024-07-12 11:44:02.102605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.803 [2024-07-12 11:44:02.102623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.803 [2024-07-12 11:44:02.102636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.803 [2024-07-12 11:44:02.107093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.804 [2024-07-12 11:44:02.114714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.804 [2024-07-12 11:44:02.115143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.804 [2024-07-12 11:44:02.115165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.804 [2024-07-12 11:44:02.115175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.804 [2024-07-12 11:44:02.115363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.804 [2024-07-12 11:44:02.115578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.804 [2024-07-12 11:44:02.115592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.804 [2024-07-12 11:44:02.115602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.804 [2024-07-12 11:44:02.118596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.804 [2024-07-12 11:44:02.127854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.804 [2024-07-12 11:44:02.128317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.804 [2024-07-12 11:44:02.128358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.804 [2024-07-12 11:44:02.128367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.804 [2024-07-12 11:44:02.128584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.804 [2024-07-12 11:44:02.128777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.804 [2024-07-12 11:44:02.128790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.804 [2024-07-12 11:44:02.128799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.804 [2024-07-12 11:44:02.131704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.804 [2024-07-12 11:44:02.140952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.804 [2024-07-12 11:44:02.141325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.804 [2024-07-12 11:44:02.141345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.804 [2024-07-12 11:44:02.141355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.804 [2024-07-12 11:44:02.141545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.804 [2024-07-12 11:44:02.141730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.804 [2024-07-12 11:44:02.141742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.804 [2024-07-12 11:44:02.141751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:15.804 [2024-07-12 11:44:02.144644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:15.804 [2024-07-12 11:44:02.154193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:15.804 [2024-07-12 11:44:02.154680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:15.804 [2024-07-12 11:44:02.154736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:15.804 [2024-07-12 11:44:02.154768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:15.804 [2024-07-12 11:44:02.155428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:15.804 [2024-07-12 11:44:02.155782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:15.804 [2024-07-12 11:44:02.155795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:15.804 [2024-07-12 11:44:02.155804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.158822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.167401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.167866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.167922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.167953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.168613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.169013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.169030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.169039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.171933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.180604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.181078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.181136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.181168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.181658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.181842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.181854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.181864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.184752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.193761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.194204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.194259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.194290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.194830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.195025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.195039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.195048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.198021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.206871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.207325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.207346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.207355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.207570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.207765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.207778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.207787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.210750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.220098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.220547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.220617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.220648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.221292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.221883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.221896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.221907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.224810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.233314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.233777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.233834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.233866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.234525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.234993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.235006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.235015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.237907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.246511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.246989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.247049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.247081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.065 [2024-07-12 11:44:02.247743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.065 [2024-07-12 11:44:02.248243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.065 [2024-07-12 11:44:02.248256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.065 [2024-07-12 11:44:02.248264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.065 [2024-07-12 11:44:02.251118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.065 [2024-07-12 11:44:02.259627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.065 [2024-07-12 11:44:02.259962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.065 [2024-07-12 11:44:02.260018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.065 [2024-07-12 11:44:02.260057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.066 [2024-07-12 11:44:02.260718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.066 [2024-07-12 11:44:02.261125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.066 [2024-07-12 11:44:02.261138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.066 [2024-07-12 11:44:02.261147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.066 [2024-07-12 11:44:02.264033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.066 [2024-07-12 11:44:02.272698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.066 [2024-07-12 11:44:02.273146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.066 [2024-07-12 11:44:02.273202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.066 [2024-07-12 11:44:02.273235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.066 [2024-07-12 11:44:02.273894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.066 [2024-07-12 11:44:02.274318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.066 [2024-07-12 11:44:02.274330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.066 [2024-07-12 11:44:02.274339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.066 [2024-07-12 11:44:02.278528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.066 [2024-07-12 11:44:02.286621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.287077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.287138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.287185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.287845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.288288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.288300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.288310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.291294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.299803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.300245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.300296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.300329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.300912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.301107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.301123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.301132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.304020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.313020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.313405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.313426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.313437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.313619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.313803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.313815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.313824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.316770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.326107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.326575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.326632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.326663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.327170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.327354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.327366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.327375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.330473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.339456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.339927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.339949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.339960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.340153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.340347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.340360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.340369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.343497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.352842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.353323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.353395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.353430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.353972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.354166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.354178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.354188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.357207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.366084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.366508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.366529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.066 [2024-07-12 11:44:02.366539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.066 [2024-07-12 11:44:02.366722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.066 [2024-07-12 11:44:02.366906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.066 [2024-07-12 11:44:02.366918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.066 [2024-07-12 11:44:02.366927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.066 [2024-07-12 11:44:02.369875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.066 [2024-07-12 11:44:02.379216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.066 [2024-07-12 11:44:02.379672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.066 [2024-07-12 11:44:02.379693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.067 [2024-07-12 11:44:02.379703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.067 [2024-07-12 11:44:02.379886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.067 [2024-07-12 11:44:02.380068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.067 [2024-07-12 11:44:02.380081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.067 [2024-07-12 11:44:02.380090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.067 [2024-07-12 11:44:02.383046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.067 [2024-07-12 11:44:02.392395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.067 [2024-07-12 11:44:02.392854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.067 [2024-07-12 11:44:02.392910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.067 [2024-07-12 11:44:02.392948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.067 [2024-07-12 11:44:02.393361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.067 [2024-07-12 11:44:02.393577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.067 [2024-07-12 11:44:02.393590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.067 [2024-07-12 11:44:02.393600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.067 [2024-07-12 11:44:02.396520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.067 [2024-07-12 11:44:02.405529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.067 [2024-07-12 11:44:02.405906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.067 [2024-07-12 11:44:02.405927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.067 [2024-07-12 11:44:02.405937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.067 [2024-07-12 11:44:02.406120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.067 [2024-07-12 11:44:02.406304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.067 [2024-07-12 11:44:02.406316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.067 [2024-07-12 11:44:02.406325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.067 [2024-07-12 11:44:02.409270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.067 [2024-07-12 11:44:02.418779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.067 [2024-07-12 11:44:02.419249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.067 [2024-07-12 11:44:02.419304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.067 [2024-07-12 11:44:02.419334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.067 [2024-07-12 11:44:02.419993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.067 [2024-07-12 11:44:02.420444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.067 [2024-07-12 11:44:02.420458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.067 [2024-07-12 11:44:02.420466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.423502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.432033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.432483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.432505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.432515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.432698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.432882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.432897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.432906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.435853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.445194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.445657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.445714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.445746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.446258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.446515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.446534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.446547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.450998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.458928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.459338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.459360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.459370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.459585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.459780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.459793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.459803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.462771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.472031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.472405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.472426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.472436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.472619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.472809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.472822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.472830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.475775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.485129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.485578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.485600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.485640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.486218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.486424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.486437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.486447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.489369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.498386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.498847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.498906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.498937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.499503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.499698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.499711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.499720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.502629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.511467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.511910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.511931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.511940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.512123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.512307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.512319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.512328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.515273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.524711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.525111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.525168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.525208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.328 [2024-07-12 11:44:02.525669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.328 [2024-07-12 11:44:02.525862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.328 [2024-07-12 11:44:02.525874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.328 [2024-07-12 11:44:02.525883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.328 [2024-07-12 11:44:02.528779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.328 [2024-07-12 11:44:02.537854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.328 [2024-07-12 11:44:02.538230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.328 [2024-07-12 11:44:02.538251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.328 [2024-07-12 11:44:02.538261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.538468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.538663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.538676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.538685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.541595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.551033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.551520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.551578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.551609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.552254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.552761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.552775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.552783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.555690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.564345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.564777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.564798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.564809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.564992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.565176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.565192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.565201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.568056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.577548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.577989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.578010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.578021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.578214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.578414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.578428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.578438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.581311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.590863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.591257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.591278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.591289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.591487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.591683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.591696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.591705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.594817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.604320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.604782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.604838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.604870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.605529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.605748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.605762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.605771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.608795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.617710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.618166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.618188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.618199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.618400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.618595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.618609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.618618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.621539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.630809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.631264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.631315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.631347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.631887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.632082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.632095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.632105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.634996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.644101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.329 [2024-07-12 11:44:02.644478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.329 [2024-07-12 11:44:02.644535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.329 [2024-07-12 11:44:02.644568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.329 [2024-07-12 11:44:02.644797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.329 [2024-07-12 11:44:02.644981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.329 [2024-07-12 11:44:02.644994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.329 [2024-07-12 11:44:02.645002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.329 [2024-07-12 11:44:02.647948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.329 [2024-07-12 11:44:02.657330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.329 [2024-07-12 11:44:02.657715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.329 [2024-07-12 11:44:02.657736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.329 [2024-07-12 11:44:02.657749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.329 [2024-07-12 11:44:02.657934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.329 [2024-07-12 11:44:02.658118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.329 [2024-07-12 11:44:02.658130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.329 [2024-07-12 11:44:02.658145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.329 [2024-07-12 11:44:02.661091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.329 [2024-07-12 11:44:02.670696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.329 [2024-07-12 11:44:02.671181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.329 [2024-07-12 11:44:02.671240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.329 [2024-07-12 11:44:02.671272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.329 [2024-07-12 11:44:02.671934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.329 [2024-07-12 11:44:02.672279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.329 [2024-07-12 11:44:02.672293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.329 [2024-07-12 11:44:02.672302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.329 [2024-07-12 11:44:02.675370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.329 [2024-07-12 11:44:02.684061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.590 [2024-07-12 11:44:02.684478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.590 [2024-07-12 11:44:02.684501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.590 [2024-07-12 11:44:02.684513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.590 [2024-07-12 11:44:02.684711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.590 [2024-07-12 11:44:02.684912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.590 [2024-07-12 11:44:02.684925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.590 [2024-07-12 11:44:02.684935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.590 [2024-07-12 11:44:02.688021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.590 [2024-07-12 11:44:02.697361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.590 [2024-07-12 11:44:02.697813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.590 [2024-07-12 11:44:02.697871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.590 [2024-07-12 11:44:02.697902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.590 [2024-07-12 11:44:02.698560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.590 [2024-07-12 11:44:02.699136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.590 [2024-07-12 11:44:02.699149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.590 [2024-07-12 11:44:02.699158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.590 [2024-07-12 11:44:02.702176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.590 [2024-07-12 11:44:02.710733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.590 [2024-07-12 11:44:02.711187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.590 [2024-07-12 11:44:02.711235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.590 [2024-07-12 11:44:02.711269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.711929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.712123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.712135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.712144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.715161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.723927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.724370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.724396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.724407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.724590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.724774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.724786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.724795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.727792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.737066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.737536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.737559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.737569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.737752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.737936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.737949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.737958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.740811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.750235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.750693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.750714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.750724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.750906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.751089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.751101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.751110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.754059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.763403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.763847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.763904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.763935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.764596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.765187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.765199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.765208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.768093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.776594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.777046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.777067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.777077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.777259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.777468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.777481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.777492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.780415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.789756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.790196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.790217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.790229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.790436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.790632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.790645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.790654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.793647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.802934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.803324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.803394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.803428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.804072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.804512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.804526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.804535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.807400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.816167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.816597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.816619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.816630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.816823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.817018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.817030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.817040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.820051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.829508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.829954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.830020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.830052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.830587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.830785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.830798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.830807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.833809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.842618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.842997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.843019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.843030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.843230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.843435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.843449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.843460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.846586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.855940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.856411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.856471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.591 [2024-07-12 11:44:02.856504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.591 [2024-07-12 11:44:02.857148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.591 [2024-07-12 11:44:02.857521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.591 [2024-07-12 11:44:02.857535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.591 [2024-07-12 11:44:02.857544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.591 [2024-07-12 11:44:02.860567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.591 [2024-07-12 11:44:02.869153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.591 [2024-07-12 11:44:02.869580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.591 [2024-07-12 11:44:02.869601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.592 [2024-07-12 11:44:02.869612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.592 [2024-07-12 11:44:02.869795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.592 [2024-07-12 11:44:02.869979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.592 [2024-07-12 11:44:02.869991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.592 [2024-07-12 11:44:02.870000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.592 [2024-07-12 11:44:02.872957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.592 [2024-07-12 11:44:02.882428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.592 [2024-07-12 11:44:02.882878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.592 [2024-07-12 11:44:02.882935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.592 [2024-07-12 11:44:02.882968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.592 [2024-07-12 11:44:02.883559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.592 [2024-07-12 11:44:02.883753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.592 [2024-07-12 11:44:02.883766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.592 [2024-07-12 11:44:02.883775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.592 [2024-07-12 11:44:02.886688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.592 [2024-07-12 11:44:02.895611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.592 [2024-07-12 11:44:02.896061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.592 [2024-07-12 11:44:02.896082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.592 [2024-07-12 11:44:02.896091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.592 [2024-07-12 11:44:02.896274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.592 [2024-07-12 11:44:02.896465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.592 [2024-07-12 11:44:02.896479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.592 [2024-07-12 11:44:02.896488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.592 [2024-07-12 11:44:02.899335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.592 [2024-07-12 11:44:02.908760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:16.592 [2024-07-12 11:44:02.909211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.592 [2024-07-12 11:44:02.909232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:16.592 [2024-07-12 11:44:02.909241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:16.592 [2024-07-12 11:44:02.909432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:16.592 [2024-07-12 11:44:02.909643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:16.592 [2024-07-12 11:44:02.909656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:16.592 [2024-07-12 11:44:02.909665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:16.592 [2024-07-12 11:44:02.912583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:16.592 [2024-07-12 11:44:02.921936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.592 [2024-07-12 11:44:02.922405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.592 [2024-07-12 11:44:02.922469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.592 [2024-07-12 11:44:02.922501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.592 [2024-07-12 11:44:02.923146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.592 [2024-07-12 11:44:02.923646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.592 [2024-07-12 11:44:02.923659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.592 [2024-07-12 11:44:02.923669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.592 [2024-07-12 11:44:02.926589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.592 [2024-07-12 11:44:02.935171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.592 [2024-07-12 11:44:02.935537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.592 [2024-07-12 11:44:02.935558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.592 [2024-07-12 11:44:02.935568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.592 [2024-07-12 11:44:02.935752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.592 [2024-07-12 11:44:02.935935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.592 [2024-07-12 11:44:02.935948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.592 [2024-07-12 11:44:02.935958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.592 [2024-07-12 11:44:02.938916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.852 [2024-07-12 11:44:02.948579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.852 [2024-07-12 11:44:02.949027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.852 [2024-07-12 11:44:02.949049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.852 [2024-07-12 11:44:02.949059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.852 [2024-07-12 11:44:02.949253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.852 [2024-07-12 11:44:02.949455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.852 [2024-07-12 11:44:02.949469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.852 [2024-07-12 11:44:02.949479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.852 [2024-07-12 11:44:02.952518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.852 [2024-07-12 11:44:02.961739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.852 [2024-07-12 11:44:02.962194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.852 [2024-07-12 11:44:02.962251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.852 [2024-07-12 11:44:02.962282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.852 [2024-07-12 11:44:02.962733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.852 [2024-07-12 11:44:02.962931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.852 [2024-07-12 11:44:02.962944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.852 [2024-07-12 11:44:02.962953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.852 [2024-07-12 11:44:02.965856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.852 [2024-07-12 11:44:02.974812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.852 [2024-07-12 11:44:02.975163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.852 [2024-07-12 11:44:02.975218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.852 [2024-07-12 11:44:02.975250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.852 [2024-07-12 11:44:02.975769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.852 [2024-07-12 11:44:02.975963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.852 [2024-07-12 11:44:02.975976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.852 [2024-07-12 11:44:02.975986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.852 [2024-07-12 11:44:02.978886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.852 [2024-07-12 11:44:02.988013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.852 [2024-07-12 11:44:02.988485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.852 [2024-07-12 11:44:02.988545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.852 [2024-07-12 11:44:02.988577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.852 [2024-07-12 11:44:02.989223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.852 [2024-07-12 11:44:02.989802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.852 [2024-07-12 11:44:02.989816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.852 [2024-07-12 11:44:02.989825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.852 [2024-07-12 11:44:02.992785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.852 [2024-07-12 11:44:03.001223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.852 [2024-07-12 11:44:03.001673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.852 [2024-07-12 11:44:03.001695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.852 [2024-07-12 11:44:03.001705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.852 [2024-07-12 11:44:03.001887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.852 [2024-07-12 11:44:03.002069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.852 [2024-07-12 11:44:03.002082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.852 [2024-07-12 11:44:03.002090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.005035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.014452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.014908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.014929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.014939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.015123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.015306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.015318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.015327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.018275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.027632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.028011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.028033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.028043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.028226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.028416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.028428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.028437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.031290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.040802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.041245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.041315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.041346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.041751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.041945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.041958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.041968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.044866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.054013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.054469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.054494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.054504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.054687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.054871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.054884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.054894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.057753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.067175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.067603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.067625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.067636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.067830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.068024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.068036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.068046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.070943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.080619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.081019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.081043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.081054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.081253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.081466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.081480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.081490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.084602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.093978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.094383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.094407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.094418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.094618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.094821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.094835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.094845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.097958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.107515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.107899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.107922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.107933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.108132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.108332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.108346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.108356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.111478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.121032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.121495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.121517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.121528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.121727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.121927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.121940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.121950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.125061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.134559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.135004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.135027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.135038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.135237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.135444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.135458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.135471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.138586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.148034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.148438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.148461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.148472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.148677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.148883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.148896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.148906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.152155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.161740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.162122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.162145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.162156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.162361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.162599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.162614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.162625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.165956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.175314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.175710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.175733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.175744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.175949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.176154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.176168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.176178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.179474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.188867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.189309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.189337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.189347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.189559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.853 [2024-07-12 11:44:03.189765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.853 [2024-07-12 11:44:03.189778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.853 [2024-07-12 11:44:03.189788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.853 [2024-07-12 11:44:03.192994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:16.853 [2024-07-12 11:44:03.202395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:16.853 [2024-07-12 11:44:03.202707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.853 [2024-07-12 11:44:03.202731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:16.853 [2024-07-12 11:44:03.202742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:16.853 [2024-07-12 11:44:03.202941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:16.854 [2024-07-12 11:44:03.203143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:16.854 [2024-07-12 11:44:03.203156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:16.854 [2024-07-12 11:44:03.203165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:16.854 [2024-07-12 11:44:03.206382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.216177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.216640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.216663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.216674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.216893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.217111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.217126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.217136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.220564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.229869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.230334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.230358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.230369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.230606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.230825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.230839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.230849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.234277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.243407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.243862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.243885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.243896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.244102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.244308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.244321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.244331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.247548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.257048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.257496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.257520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.257531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.257737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.257943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.257957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.257967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.261177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.270667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.270983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.271006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.271018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.271223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.271436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.271450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.271463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.274817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.284232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.154 [2024-07-12 11:44:03.284719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.154 [2024-07-12 11:44:03.284742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.154 [2024-07-12 11:44:03.284754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.154 [2024-07-12 11:44:03.284974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.154 [2024-07-12 11:44:03.285193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.154 [2024-07-12 11:44:03.285208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.154 [2024-07-12 11:44:03.285218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.154 [2024-07-12 11:44:03.288522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.154 [2024-07-12 11:44:03.297869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.154 [2024-07-12 11:44:03.298322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.154 [2024-07-12 11:44:03.298344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.154 [2024-07-12 11:44:03.298355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.154 [2024-07-12 11:44:03.298568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.154 [2024-07-12 11:44:03.298775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.154 [2024-07-12 11:44:03.298789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.154 [2024-07-12 11:44:03.298799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.154 [2024-07-12 11:44:03.301998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.154 [2024-07-12 11:44:03.311420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.154 [2024-07-12 11:44:03.311921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.154 [2024-07-12 11:44:03.311945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.154 [2024-07-12 11:44:03.311956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.154 [2024-07-12 11:44:03.312175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.154 [2024-07-12 11:44:03.312401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.154 [2024-07-12 11:44:03.312416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.154 [2024-07-12 11:44:03.312427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.154 [2024-07-12 11:44:03.315652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.324944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.325410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.325433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.325444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.325650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.325856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.325870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.325880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.329092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.338680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.339159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.339182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.339193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.339403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.339610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.339623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.339633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.342841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.352169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.352644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.352668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.352679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.352886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.353093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.353106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.353116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.356327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.365737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.366230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.366253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.366265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.366494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.366715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.366729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.366740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.370180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.379495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.379986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.380011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.380023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.380242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.380468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.380483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.380493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.383920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.393203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.393675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.393698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.393710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.393915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.394122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.394136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.394146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.397354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.406754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.407154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.407176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.407187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.407398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.407604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.407617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.407631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.410838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.420511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.420934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.420958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.420970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.421190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.421418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.421439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.421450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.424873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.434039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.434406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.434429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.434440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.434641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.434843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.434856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.434866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.437981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.447547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.447999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.448055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.448087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.448712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.448998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.449015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.449029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.155 [2024-07-12 11:44:03.453491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.155 [2024-07-12 11:44:03.461266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.155 [2024-07-12 11:44:03.461718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.155 [2024-07-12 11:44:03.461738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.155 [2024-07-12 11:44:03.461748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.155 [2024-07-12 11:44:03.461937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.155 [2024-07-12 11:44:03.462126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.155 [2024-07-12 11:44:03.462138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.155 [2024-07-12 11:44:03.462148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.156 [2024-07-12 11:44:03.465148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.156 [2024-07-12 11:44:03.474391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.156 [2024-07-12 11:44:03.474840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.156 [2024-07-12 11:44:03.474908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.156 [2024-07-12 11:44:03.474941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.156 [2024-07-12 11:44:03.475601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.156 [2024-07-12 11:44:03.475802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.156 [2024-07-12 11:44:03.475815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.156 [2024-07-12 11:44:03.475824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.156 [2024-07-12 11:44:03.478673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.156 [2024-07-12 11:44:03.487597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.156 [2024-07-12 11:44:03.487956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.156 [2024-07-12 11:44:03.488014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.156 [2024-07-12 11:44:03.488046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.156 [2024-07-12 11:44:03.488709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.156 [2024-07-12 11:44:03.489156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.156 [2024-07-12 11:44:03.489169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.156 [2024-07-12 11:44:03.489179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.156 [2024-07-12 11:44:03.492067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.156 [2024-07-12 11:44:03.500826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.156 [2024-07-12 11:44:03.501280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.156 [2024-07-12 11:44:03.501336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.156 [2024-07-12 11:44:03.501368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.156 [2024-07-12 11:44:03.501815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.156 [2024-07-12 11:44:03.502009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.156 [2024-07-12 11:44:03.502022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.156 [2024-07-12 11:44:03.502032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.156 [2024-07-12 11:44:03.504931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.416 [2024-07-12 11:44:03.514302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.416 [2024-07-12 11:44:03.514770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.416 [2024-07-12 11:44:03.514791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.416 [2024-07-12 11:44:03.514801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.416 [2024-07-12 11:44:03.515001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.416 [2024-07-12 11:44:03.515201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.416 [2024-07-12 11:44:03.515215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.416 [2024-07-12 11:44:03.515225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.416 [2024-07-12 11:44:03.518324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.416 [2024-07-12 11:44:03.527515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.416 [2024-07-12 11:44:03.527955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.416 [2024-07-12 11:44:03.527976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.416 [2024-07-12 11:44:03.527986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.416 [2024-07-12 11:44:03.528170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.416 [2024-07-12 11:44:03.528353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.416 [2024-07-12 11:44:03.528365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.416 [2024-07-12 11:44:03.528375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.416 [2024-07-12 11:44:03.531326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.416 [2024-07-12 11:44:03.540683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.416 [2024-07-12 11:44:03.541065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.416 [2024-07-12 11:44:03.541085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.416 [2024-07-12 11:44:03.541095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.416 [2024-07-12 11:44:03.541279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.416 [2024-07-12 11:44:03.541469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.416 [2024-07-12 11:44:03.541482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.416 [2024-07-12 11:44:03.541498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.416 [2024-07-12 11:44:03.544346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.416 [2024-07-12 11:44:03.553818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.416 [2024-07-12 11:44:03.554209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.416 [2024-07-12 11:44:03.554266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.416 [2024-07-12 11:44:03.554298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.416 [2024-07-12 11:44:03.554927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.416 [2024-07-12 11:44:03.555122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.416 [2024-07-12 11:44:03.555135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.416 [2024-07-12 11:44:03.555144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.416 [2024-07-12 11:44:03.558098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.416 [2024-07-12 11:44:03.567011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.416 [2024-07-12 11:44:03.567447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.416 [2024-07-12 11:44:03.567469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.416 [2024-07-12 11:44:03.567479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.416 [2024-07-12 11:44:03.567662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.567845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.567858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.567866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.570817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.580171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.580638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.580697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.580729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.581264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.581480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.581494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.581504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.584431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.593349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.593814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.593868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.593901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.594446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.594641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.594654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.594664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.597579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.606590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.606934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.606956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.606966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.607160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.607354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.607367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.607383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.610513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.620035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.620433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.620455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.620466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.620670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.620854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.620866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.620876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.623882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.633224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.633699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.633720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.633730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.633916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.634100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.634113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.634122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.637071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.646321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.646772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.646837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.646869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.647382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.647591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.647603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.647612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.650533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.659478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.659941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.659998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.660030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.660500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.660694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.660707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.660716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.663630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.672638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.673100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.673121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.673131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.673316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.673526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.673540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.673553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.676529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.685866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.686303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.686325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.686336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.686536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.686737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.686750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.686760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.689643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.699047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.699432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.699492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.699524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.417 [2024-07-12 11:44:03.700168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.417 [2024-07-12 11:44:03.700623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.417 [2024-07-12 11:44:03.700637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.417 [2024-07-12 11:44:03.700646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.417 [2024-07-12 11:44:03.703652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.417 [2024-07-12 11:44:03.712401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.417 [2024-07-12 11:44:03.712840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.417 [2024-07-12 11:44:03.712862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.417 [2024-07-12 11:44:03.712873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.418 [2024-07-12 11:44:03.713072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.418 [2024-07-12 11:44:03.713272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.418 [2024-07-12 11:44:03.713284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.418 [2024-07-12 11:44:03.713294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.418 [2024-07-12 11:44:03.716411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.418 [2024-07-12 11:44:03.725558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.418 [2024-07-12 11:44:03.725955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.418 [2024-07-12 11:44:03.725975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.418 [2024-07-12 11:44:03.725985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.418 [2024-07-12 11:44:03.726168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.418 [2024-07-12 11:44:03.726352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.418 [2024-07-12 11:44:03.726364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.418 [2024-07-12 11:44:03.726373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.418 [2024-07-12 11:44:03.729323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.418 [2024-07-12 11:44:03.738682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.418 [2024-07-12 11:44:03.739121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.418 [2024-07-12 11:44:03.739142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.418 [2024-07-12 11:44:03.739152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.418 [2024-07-12 11:44:03.739335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.418 [2024-07-12 11:44:03.739547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.418 [2024-07-12 11:44:03.739560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.418 [2024-07-12 11:44:03.739569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.418 [2024-07-12 11:44:03.742491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.418 [2024-07-12 11:44:03.751916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.418 [2024-07-12 11:44:03.752435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.418 [2024-07-12 11:44:03.752494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.418 [2024-07-12 11:44:03.752526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.418 [2024-07-12 11:44:03.753173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.418 [2024-07-12 11:44:03.753732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.418 [2024-07-12 11:44:03.753743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.418 [2024-07-12 11:44:03.753752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.418 [2024-07-12 11:44:03.756752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.418 [2024-07-12 11:44:03.765355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.418 [2024-07-12 11:44:03.765816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.418 [2024-07-12 11:44:03.765839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.418 [2024-07-12 11:44:03.765850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.418 [2024-07-12 11:44:03.766049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.418 [2024-07-12 11:44:03.766243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.418 [2024-07-12 11:44:03.766256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.418 [2024-07-12 11:44:03.766266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.418 [2024-07-12 11:44:03.769360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.678 [2024-07-12 11:44:03.778706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.678 [2024-07-12 11:44:03.779183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.678 [2024-07-12 11:44:03.779239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.678 [2024-07-12 11:44:03.779271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.678 [2024-07-12 11:44:03.779721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.678 [2024-07-12 11:44:03.779916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.678 [2024-07-12 11:44:03.779929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.678 [2024-07-12 11:44:03.779939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.678 [2024-07-12 11:44:03.782846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.678 [2024-07-12 11:44:03.791938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.678 [2024-07-12 11:44:03.792387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.678 [2024-07-12 11:44:03.792408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.678 [2024-07-12 11:44:03.792418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.678 [2024-07-12 11:44:03.792601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.678 [2024-07-12 11:44:03.792785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.678 [2024-07-12 11:44:03.792797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.678 [2024-07-12 11:44:03.792806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.678 [2024-07-12 11:44:03.795759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.678 [2024-07-12 11:44:03.805073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.678 [2024-07-12 11:44:03.805522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.805544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.805554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.805738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.805922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.805938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.805947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.808897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.818147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.818572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.818593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.818603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.818786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.818969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.818981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.818990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.821849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.831270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.831720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.831742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.831752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.831936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.832120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.832132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.832142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.835091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.844540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.844948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.844969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.844979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.845163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.845346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.845358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.845367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.848318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.857743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.858186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.858208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.858218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.858418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.858636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.858649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.858659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.861768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.871154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.871628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.871649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.871660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.871854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.872048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.872061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.872071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.875150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.884321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.884793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.884815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.884825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.885008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.885192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.885204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.885213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.888104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.897457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.897892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.897947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.897985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.898646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.899087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.899100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.899110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.902006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.910679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.911128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.911149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.911159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.911343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.911557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.911571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.911580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.914498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.923848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:17.679 [2024-07-12 11:44:03.924291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.679 [2024-07-12 11:44:03.924312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420
00:38:17.679 [2024-07-12 11:44:03.924322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set
00:38:17.679 [2024-07-12 11:44:03.924533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor
00:38:17.679 [2024-07-12 11:44:03.924727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:17.679 [2024-07-12 11:44:03.924741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:17.679 [2024-07-12 11:44:03.924750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:17.679 [2024-07-12 11:44:03.927661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:17.679 [2024-07-12 11:44:03.936997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.679 [2024-07-12 11:44:03.937443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.679 [2024-07-12 11:44:03.937464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.679 [2024-07-12 11:44:03.937474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.679 [2024-07-12 11:44:03.937657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.679 [2024-07-12 11:44:03.937841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.679 [2024-07-12 11:44:03.937856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.679 [2024-07-12 11:44:03.937865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.679 [2024-07-12 11:44:03.940814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.679 [2024-07-12 11:44:03.950161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.679 [2024-07-12 11:44:03.950630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.679 [2024-07-12 11:44:03.950687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.679 [2024-07-12 11:44:03.950719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.679 [2024-07-12 11:44:03.951140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:03.951324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:03.951336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:03.951345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:03.954295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:03.963310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:03.963771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:03.963827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:03.963859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:03.964280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:03.964488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:03.964502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:03.964512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:03.967433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:03.976439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:03.976865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:03.976886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:03.976896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:03.977078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:03.977261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:03.977273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:03.977282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:03.980228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:03.989591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:03.990054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:03.990112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:03.990158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:03.990818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:03.991308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:03.991321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:03.991331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:03.994209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:04.002717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:04.003165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:04.003216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:04.003249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:04.003908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:04.004383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:04.004396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:04.004406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:04.007275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:04.015912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:04.016365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:04.016432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:04.016465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:04.017062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:04.017246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:04.017258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:04.017267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:04.020211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.680 [2024-07-12 11:44:04.028984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.680 [2024-07-12 11:44:04.029443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.680 [2024-07-12 11:44:04.029499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.680 [2024-07-12 11:44:04.029539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.680 [2024-07-12 11:44:04.030184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.680 [2024-07-12 11:44:04.030519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.680 [2024-07-12 11:44:04.030532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.680 [2024-07-12 11:44:04.030541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.680 [2024-07-12 11:44:04.033561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.042303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.042752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.042809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.042841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.043401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.043612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.043625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.043634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.046560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.055380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.055836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.055858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.055868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.056051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.056235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.056247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.056256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.059203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.068547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.069006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.069064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.069096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.069588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.069773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.069791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.069800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.072647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.081709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.082158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.082179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.082190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.082373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.082587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.082599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.082609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.085537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.094823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.095281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.095303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.095314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.095525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.095721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.095733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.095744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.098662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.107948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.108289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.108311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.108322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.108542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.108743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.108755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.108765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.111880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.121422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.121884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.121906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.121917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.122112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.122306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.122318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.122327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.125346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.134799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.135229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.135250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.135260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.135467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.135661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.135674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.135683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.138598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.147931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.148367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.941 [2024-07-12 11:44:04.148437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.941 [2024-07-12 11:44:04.148469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.941 [2024-07-12 11:44:04.149113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.941 [2024-07-12 11:44:04.149638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.941 [2024-07-12 11:44:04.149652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.941 [2024-07-12 11:44:04.149662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.941 [2024-07-12 11:44:04.152560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.941 [2024-07-12 11:44:04.161199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.941 [2024-07-12 11:44:04.161598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.161621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.161647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.161831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.162014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.162027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.162037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.164935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.174448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.174884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.174905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.174915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.175098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.175288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.175300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.175310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.178258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.187867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.188343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.188412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.188446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.189091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.189717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.189731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.189740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.192784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.201267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.201648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.201670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.201680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.201874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.202067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.202083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.202092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.205126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.214473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.214863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.214884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.214895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.215088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.215282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.215295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.215304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.218191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.227723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.228076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.228099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.228110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.228294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.228505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.228519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.228529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.231449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.240889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.241354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.241424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.241458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.241868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.242050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.242062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.242072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.245025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.254116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.254547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.254569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.254579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.254763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.254947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.254959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.254968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.257925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.267277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.267680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.267738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.267770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.268428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.268926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.268944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.268958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.273422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.281020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.281496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.281519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.281542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.281730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.281917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.281930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.281939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:17.942 [2024-07-12 11:44:04.284913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:17.942 [2024-07-12 11:44:04.294379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:17.942 [2024-07-12 11:44:04.294769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.942 [2024-07-12 11:44:04.294791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:17.942 [2024-07-12 11:44:04.294804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:17.942 [2024-07-12 11:44:04.294997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:17.942 [2024-07-12 11:44:04.295190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:17.942 [2024-07-12 11:44:04.295204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:17.942 [2024-07-12 11:44:04.295213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.203 [2024-07-12 11:44:04.298257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1174296 Killed "${NVMF_APP[@]}" "$@" 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1175936 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1175936 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1175936 ']' 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.203 [2024-07-12 11:44:04.307819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:18.203 [2024-07-12 11:44:04.308273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.203 [2024-07-12 11:44:04.308296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.203 [2024-07-12 11:44:04.308308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.203 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:18.203 [2024-07-12 11:44:04.308516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.308717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 11:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.204 [2024-07-12 11:44:04.308731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.308742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.311861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.321241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.321710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.321732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.321746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.321946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.322146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.322159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.322168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.325282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.334669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.335060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.335081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.335092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.335290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.335495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.335509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.335520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.338643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.347920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.348339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.348361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.348372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.348572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.348767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.348779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.348789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.351809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.361313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.361799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.361821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.361832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.362032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.362233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.362250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.362266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.365395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.374746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.375215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.375237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.375248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.375448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.375645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.375658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.375668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.378751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:18.204 [2024-07-12 11:44:04.387490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:38:18.204 [2024-07-12 11:44:04.387560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.204 [2024-07-12 11:44:04.388045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.388419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.388442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.388453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.388648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.388843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.388857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.388867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.391899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.401461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.401935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.401957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.401967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.402162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.402357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.402374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.402389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.405466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.414946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.415402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.415426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.415437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.415640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.415842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.415856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.415866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.418990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.428375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.428831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.428854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.428866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.429069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.429270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.429283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.429297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.432426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 [2024-07-12 11:44:04.441825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.204 [2024-07-12 11:44:04.442321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.204 [2024-07-12 11:44:04.442345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.204 [2024-07-12 11:44:04.442357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.204 [2024-07-12 11:44:04.442566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.204 [2024-07-12 11:44:04.442770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.204 [2024-07-12 11:44:04.442783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.204 [2024-07-12 11:44:04.442793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.204 [2024-07-12 11:44:04.445919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.204 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.205 [2024-07-12 11:44:04.455329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.455808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.455831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.455842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.456045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.456246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.456260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.456273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.459401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.468801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.469249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.469271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.469282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.469491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.469694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.469708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.469718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.472850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.482300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.482709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.482732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.482743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.482945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.483147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.483161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.483170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.486297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.495712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.496103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.496126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.496141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.496340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.496546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.496567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.496578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.499700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.501383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:18.205 [2024-07-12 11:44:04.509138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.509584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.509608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.509619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.509816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.510014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.510027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.510037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.513077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.522420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.522837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.522859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.522870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.523067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.523264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.523278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.523288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.526330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.535834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.536276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.536298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.536309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.536513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.536714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.536728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.536737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.539773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.205 [2024-07-12 11:44:04.549271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.205 [2024-07-12 11:44:04.549670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.205 [2024-07-12 11:44:04.549692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.205 [2024-07-12 11:44:04.549703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.205 [2024-07-12 11:44:04.549900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.205 [2024-07-12 11:44:04.550095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.205 [2024-07-12 11:44:04.550108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.205 [2024-07-12 11:44:04.550118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.205 [2024-07-12 11:44:04.553173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.562666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.563033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.563055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.563066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.563262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.563468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.563481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.563492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.566606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.576033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.576519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.576543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.576553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.576751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.576948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.576961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.576975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.580021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.589343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.589722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.589745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.589756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.589952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.590150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.590163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.590172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.593206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.602750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.603225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.603247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.603258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.603460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.603657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.603670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.603679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.606740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.616032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.616485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.616509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.616520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.616722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.616923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.616936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.616946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.620064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.629503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.629846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.629872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.629883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.630079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.630276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.630289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.630299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.633395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.642870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.643334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.643355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.643366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.643569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.643767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.643782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.643791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.646827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.656307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.656802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.656824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.656836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.657038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.657241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.657255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.657264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.660327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.669720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.670194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.670216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.670226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.670428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.670631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.670644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.670654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.673683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.683008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.683419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.683441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.683452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.683650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.683847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.683860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.683869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.686905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.696403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.696781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.696804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.696815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.697011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.697205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.697218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.697228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.700260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.709824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.710309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.710332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.710343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.710545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.710742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.710755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.710768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.713800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.723110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.723607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.723629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.723640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.723837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.724022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.724035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.724044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.727075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:18.466 [2024-07-12 11:44:04.727587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.466 [2024-07-12 11:44:04.727618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:18.466 [2024-07-12 11:44:04.727633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.466 [2024-07-12 11:44:04.727643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:38:18.466 [2024-07-12 11:44:04.727652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.466 [2024-07-12 11:44:04.727721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:18.466 [2024-07-12 11:44:04.727782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.466 [2024-07-12 11:44:04.727793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:18.466 [2024-07-12 11:44:04.736511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.736872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.736898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.736911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.737116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.737322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.737335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.737346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.740493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.466 [2024-07-12 11:44:04.749961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.466 [2024-07-12 11:44:04.750393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-07-12 11:44:04.750425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.466 [2024-07-12 11:44:04.750436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.466 [2024-07-12 11:44:04.750653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.466 [2024-07-12 11:44:04.750851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.466 [2024-07-12 11:44:04.750864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.466 [2024-07-12 11:44:04.750875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.466 [2024-07-12 11:44:04.754019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.467 [2024-07-12 11:44:04.763457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.467 [2024-07-12 11:44:04.763866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-07-12 11:44:04.763890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.467 [2024-07-12 11:44:04.763902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.467 [2024-07-12 11:44:04.764105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.467 [2024-07-12 11:44:04.764309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.467 [2024-07-12 11:44:04.764322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.467 [2024-07-12 11:44:04.764332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.467 [2024-07-12 11:44:04.767465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.467 [2024-07-12 11:44:04.776894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.467 [2024-07-12 11:44:04.777340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-07-12 11:44:04.777363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.467 [2024-07-12 11:44:04.777374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.467 [2024-07-12 11:44:04.777581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.467 [2024-07-12 11:44:04.777784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.467 [2024-07-12 11:44:04.777797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.467 [2024-07-12 11:44:04.777807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.467 [2024-07-12 11:44:04.780936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.467 [2024-07-12 11:44:04.790365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.467 [2024-07-12 11:44:04.790687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-07-12 11:44:04.790710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.467 [2024-07-12 11:44:04.790720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.467 [2024-07-12 11:44:04.790922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.467 [2024-07-12 11:44:04.791124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.467 [2024-07-12 11:44:04.791137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.467 [2024-07-12 11:44:04.791151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.467 [2024-07-12 11:44:04.794280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.467 [2024-07-12 11:44:04.803879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.467 [2024-07-12 11:44:04.804370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-07-12 11:44:04.804399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.467 [2024-07-12 11:44:04.804410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.467 [2024-07-12 11:44:04.804612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.467 [2024-07-12 11:44:04.804813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.467 [2024-07-12 11:44:04.804827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.467 [2024-07-12 11:44:04.804837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.467 [2024-07-12 11:44:04.807969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.467 [2024-07-12 11:44:04.817399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.467 [2024-07-12 11:44:04.817750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-07-12 11:44:04.817775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.467 [2024-07-12 11:44:04.817786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.467 [2024-07-12 11:44:04.817990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.467 [2024-07-12 11:44:04.818196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.467 [2024-07-12 11:44:04.818209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.467 [2024-07-12 11:44:04.818219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.467 [2024-07-12 11:44:04.821370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.830836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.831323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.831348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.831360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.831570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.831776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.831790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.831801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.834943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.844394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.844738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.844760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.844772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.844976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.845178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.845191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.845202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.848340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.857784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.858254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.858276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.858287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.858499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.858704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.858717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.858727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.861856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.871289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.871769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.871791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.871802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.872004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.872205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.872218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.872228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.875359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.884776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.885147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.885170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.885181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.885395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.885598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.885612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.885622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.888739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.898132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.898610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.898633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.898644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.898843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.899044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.899058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.899073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.902197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.911585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.912052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.727 [2024-07-12 11:44:04.912075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.727 [2024-07-12 11:44:04.912085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.727 [2024-07-12 11:44:04.912284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.727 [2024-07-12 11:44:04.912492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.727 [2024-07-12 11:44:04.912514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.727 [2024-07-12 11:44:04.912524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.727 [2024-07-12 11:44:04.915643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.727 [2024-07-12 11:44:04.925020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.727 [2024-07-12 11:44:04.925461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.925483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.925495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.925696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.925897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.925911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.925924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.929039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:04.938423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:04.938861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.938884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.938895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.939133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.939334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.939348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.939358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.942472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:04.951851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:04.952326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.952349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.952360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.952566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.952767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.952780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.952789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.955905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:04.965293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:04.965778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.965803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.965814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.966017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.966221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.966234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.966245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.969382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:04.978818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:04.979301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.979325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.979337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.979547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.979753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.979767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.979776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.982924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:04.992332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:04.992727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:04.992751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:04.992761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:04.992963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:04.993165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:04.993178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:04.993189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:04.996313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:05.005720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:05.006097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:05.006119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:05.006130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:05.006332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:05.006539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:05.006553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:05.006563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:05.009688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:05.019084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:05.019559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:05.019583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:05.019594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:05.019800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:05.020001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:05.020014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:05.020024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:05.023145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:05.032581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.728 [2024-07-12 11:44:05.032982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.728 [2024-07-12 11:44:05.033004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.728 [2024-07-12 11:44:05.033015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.728 [2024-07-12 11:44:05.033215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.728 [2024-07-12 11:44:05.033423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.728 [2024-07-12 11:44:05.033437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.728 [2024-07-12 11:44:05.033447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.728 [2024-07-12 11:44:05.036567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.728 [2024-07-12 11:44:05.045960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.729 [2024-07-12 11:44:05.046452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.729 [2024-07-12 11:44:05.046478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.729 [2024-07-12 11:44:05.046491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.729 [2024-07-12 11:44:05.046695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.729 [2024-07-12 11:44:05.046898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.729 [2024-07-12 11:44:05.046911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.729 [2024-07-12 11:44:05.046922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.729 [2024-07-12 11:44:05.050045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.729 [2024-07-12 11:44:05.059479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.729 [2024-07-12 11:44:05.059955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.729 [2024-07-12 11:44:05.059980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.729 [2024-07-12 11:44:05.059992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.729 [2024-07-12 11:44:05.060194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.729 [2024-07-12 11:44:05.060403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.729 [2024-07-12 11:44:05.060421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.729 [2024-07-12 11:44:05.060432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.729 [2024-07-12 11:44:05.063551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.729 [2024-07-12 11:44:05.072950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.729 [2024-07-12 11:44:05.073428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.729 [2024-07-12 11:44:05.073451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.729 [2024-07-12 11:44:05.073463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.729 [2024-07-12 11:44:05.073665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.729 [2024-07-12 11:44:05.073867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.729 [2024-07-12 11:44:05.073881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.729 [2024-07-12 11:44:05.073891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.729 [2024-07-12 11:44:05.077013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.989 [2024-07-12 11:44:05.086426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.989 [2024-07-12 11:44:05.086900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.989 [2024-07-12 11:44:05.086923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.989 [2024-07-12 11:44:05.086935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.989 [2024-07-12 11:44:05.087137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.989 [2024-07-12 11:44:05.087340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.989 [2024-07-12 11:44:05.087354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.989 [2024-07-12 11:44:05.087364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.989 [2024-07-12 11:44:05.090491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.989 [2024-07-12 11:44:05.099869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.989 [2024-07-12 11:44:05.100336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.989 [2024-07-12 11:44:05.100358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.989 [2024-07-12 11:44:05.100369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.989 [2024-07-12 11:44:05.100574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.989 [2024-07-12 11:44:05.100776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.989 [2024-07-12 11:44:05.100790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.989 [2024-07-12 11:44:05.100800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.989 [2024-07-12 11:44:05.103924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.989 [2024-07-12 11:44:05.113303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.989 [2024-07-12 11:44:05.113750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.989 [2024-07-12 11:44:05.113773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.989 [2024-07-12 11:44:05.113784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.989 [2024-07-12 11:44:05.113985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.989 [2024-07-12 11:44:05.114188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.989 [2024-07-12 11:44:05.114202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.989 [2024-07-12 11:44:05.114212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.989 [2024-07-12 11:44:05.117332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.989 [2024-07-12 11:44:05.126730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.989 [2024-07-12 11:44:05.127198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.989 [2024-07-12 11:44:05.127221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.127232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.127439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.127641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.127661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.127671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.130796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.140190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.140661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.140684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.140695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.140897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.141099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.141112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.141124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.144247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.990 [2024-07-12 11:44:05.153642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.154099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.154121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.154133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.154334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.154542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.154556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.154566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.157681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.167068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.167421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.167445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.167456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.167657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.167859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.167872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.167882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.171004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.180587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.180913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.180936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.180948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.181148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.181350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.181363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.181373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.184503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.990 [2024-07-12 11:44:05.190120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:18.990 [2024-07-12 11:44:05.194071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.194531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.194553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.194566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.194767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.194969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.194982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.194991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.198110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.990 [2024-07-12 11:44:05.207502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:18.990 [2024-07-12 11:44:05.207952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.207975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.207986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.990 [2024-07-12 11:44:05.208187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.990 [2024-07-12 11:44:05.208393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.208407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.208417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.211530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.220927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.221407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.221431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.221444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.221646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.221850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.221863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.221874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.225024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.234476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.234954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.234978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.234990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.235195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.235404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.235419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.235430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.990 [2024-07-12 11:44:05.238561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.990 [2024-07-12 11:44:05.247978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.990 [2024-07-12 11:44:05.248439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.990 [2024-07-12 11:44:05.248464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.990 [2024-07-12 11:44:05.248475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.990 [2024-07-12 11:44:05.248679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.990 [2024-07-12 11:44:05.248882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.990 [2024-07-12 11:44:05.248895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.990 [2024-07-12 11:44:05.248905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.252034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.991 [2024-07-12 11:44:05.261454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.991 [2024-07-12 11:44:05.261923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.991 [2024-07-12 11:44:05.261945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.991 [2024-07-12 11:44:05.261956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.991 [2024-07-12 11:44:05.262159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.991 [2024-07-12 11:44:05.262361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.991 [2024-07-12 11:44:05.262374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.991 [2024-07-12 11:44:05.262389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.265511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.991 [2024-07-12 11:44:05.274931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.991 [2024-07-12 11:44:05.275390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.991 [2024-07-12 11:44:05.275413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.991 [2024-07-12 11:44:05.275428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.991 [2024-07-12 11:44:05.275629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.991 [2024-07-12 11:44:05.275832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.991 [2024-07-12 11:44:05.275845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.991 [2024-07-12 11:44:05.275855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.278975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.991 [2024-07-12 11:44:05.288386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.991 [2024-07-12 11:44:05.288833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.991 [2024-07-12 11:44:05.288856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.991 [2024-07-12 11:44:05.288867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.991 [2024-07-12 11:44:05.289070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.991 [2024-07-12 11:44:05.289272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.991 [2024-07-12 11:44:05.289285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.991 [2024-07-12 11:44:05.289295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.292423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.991 Malloc0 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.991 [2024-07-12 11:44:05.301808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.991 [2024-07-12 11:44:05.302288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.991 [2024-07-12 11:44:05.302310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.991 [2024-07-12 11:44:05.302322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.991 [2024-07-12 11:44:05.302528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.991 [2024-07-12 11:44:05.302731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.991 [2024-07-12 11:44:05.302745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.991 [2024-07-12 11:44:05.302756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.305872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:18.991 [2024-07-12 11:44:05.315271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:18.991 [2024-07-12 11:44:05.315594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.991 [2024-07-12 11:44:05.315617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d280 with addr=10.0.0.2, port=4420 00:38:18.991 [2024-07-12 11:44:05.315628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:18.991 [2024-07-12 11:44:05.315828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:18.991 [2024-07-12 11:44:05.316029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:18.991 [2024-07-12 11:44:05.316042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:18.991 [2024-07-12 11:44:05.316052] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:18.991 [2024-07-12 11:44:05.317735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:18.991 [2024-07-12 11:44:05.319179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:18.991 11:44:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1175009 00:38:18.991 [2024-07-12 11:44:05.328749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:19.250 [2024-07-12 11:44:05.451697] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:29.242 00:38:29.242 Latency(us) 00:38:29.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.242 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:29.242 Verification LBA range: start 0x0 length 0x4000 00:38:29.242 Nvme1n1 : 15.01 6961.53 27.19 11813.59 0.00 6795.65 491.52 41259.19 00:38:29.242 =================================================================================================================== 00:38:29.242 Total : 6961.53 27.19 11813.59 0.00 6795.65 491.52 41259.19 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:29.242 11:44:15 
nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:29.242 rmmod nvme_tcp 00:38:29.242 rmmod nvme_fabrics 00:38:29.242 rmmod nvme_keyring 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1175936 ']' 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1175936 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1175936 ']' 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1175936 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175936 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175936' 00:38:29.242 killing process with 
pid 1175936 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1175936 00:38:29.242 11:44:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1175936 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:30.618 11:44:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.152 11:44:18 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:33.152 00:38:33.152 real 0m30.087s 00:38:33.152 user 1m15.758s 00:38:33.152 sys 0m6.358s 00:38:33.152 11:44:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:33.152 11:44:18 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:33.152 ************************************ 00:38:33.152 END TEST nvmf_bdevperf 00:38:33.152 ************************************ 00:38:33.152 11:44:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:33.152 11:44:19 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:33.152 11:44:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:33.152 11:44:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:33.152 11:44:19 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:38:33.152 ************************************ 00:38:33.152 START TEST nvmf_target_disconnect 00:38:33.152 ************************************ 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:33.152 * Looking for test storage... 00:38:33.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.152 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:33.153 11:44:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:38:33.153 11:44:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:38:38.435 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:38:38.436 
11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:38.436 11:44:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:38.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:38.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:38.436 Found net devices under 0000:86:00.0: cvl_0_0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:38.436 Found net devices under 0000:86:00.1: cvl_0_1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:38.436 
11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:38.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:38.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:38:38.436 00:38:38.436 --- 10.0.0.2 ping statistics --- 00:38:38.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.436 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:38.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:38.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:38:38.436 00:38:38.436 --- 10.0.0.1 ping statistics --- 00:38:38.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.436 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:38.436 ************************************ 00:38:38.436 START TEST nvmf_target_disconnect_tc1 00:38:38.436 ************************************ 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:38:38.436 
11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.436 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:38.437 11:44:24 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:38.437 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.437 [2024-07-12 11:44:24.541838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.437 [2024-07-12 11:44:24.541906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d000 with addr=10.0.0.2, port=4420 00:38:38.437 [2024-07-12 11:44:24.541971] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:38.437 [2024-07-12 11:44:24.541984] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:38.437 [2024-07-12 11:44:24.541995] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:38:38.437 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:38.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:38.437 Initializing NVMe Controllers 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:38.437 00:38:38.437 real 0m0.178s 00:38:38.437 user 0m0.071s 00:38:38.437 sys 0m0.106s 
00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:38.437 ************************************ 00:38:38.437 END TEST nvmf_target_disconnect_tc1 00:38:38.437 ************************************ 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:38.437 ************************************ 00:38:38.437 START TEST nvmf_target_disconnect_tc2 00:38:38.437 ************************************ 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:38.437 11:44:24 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1181324 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1181324 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1181324 ']' 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:38.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:38.437 11:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:38.437 [2024-07-12 11:44:24.703834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:38:38.437 [2024-07-12 11:44:24.703914] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:38.437 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.696 [2024-07-12 11:44:24.829794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:38.696 [2024-07-12 11:44:25.043237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:38.697 [2024-07-12 11:44:25.043286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:38.697 [2024-07-12 11:44:25.043298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:38.697 [2024-07-12 11:44:25.043307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:38.697 [2024-07-12 11:44:25.043316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:38.697 [2024-07-12 11:44:25.043469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:38:38.697 [2024-07-12 11:44:25.043560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:38:38.697 [2024-07-12 11:44:25.043630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:38:38.697 [2024-07-12 11:44:25.043654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.265 Malloc0 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.265 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.523 [2024-07-12 11:44:25.624597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:39.523 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.524 [2024-07-12 11:44:25.652832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1181568 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:39.524 11:44:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:39.524 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.433 11:44:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1181324 00:38:41.433 11:44:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error 
(sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, 
sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 [2024-07-12 11:44:27.702105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 
starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 [2024-07-12 11:44:27.702504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O 
failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 
00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 [2024-07-12 11:44:27.702862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Write completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.433 Read completed with error (sct=0, sc=8) 00:38:41.433 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 
Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Write completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 Read completed with error (sct=0, sc=8) 00:38:41.434 starting I/O failed 00:38:41.434 [2024-07-12 11:44:27.703226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:41.434 [2024-07-12 11:44:27.703485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.703510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.703729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.703754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.703884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.703900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.704155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.704197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.704450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.704781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.704823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.705002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.705057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.705203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.705245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.705382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.705398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.705571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.705613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.705845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.705887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.706021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.706063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.706346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.706361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.706581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.706597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.706746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.706761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.706975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.707017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.707145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.707187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.707402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.707444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.707698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.707740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.707961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.708245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.708432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.708547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.708788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.708917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.708934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.709227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.709242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.709423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.709439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.709598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.709613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.709825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.709840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.709989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.710004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.710179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.710195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 
00:38:41.434 [2024-07-12 11:44:27.710293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.710308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.710460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.710476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.434 qpair failed and we were unable to recover it. 00:38:41.434 [2024-07-12 11:44:27.710699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.434 [2024-07-12 11:44:27.710714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.710816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.710831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.710934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.710949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.711138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.711178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.711403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.711447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.711711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.711752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.712027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.712068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.712323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.712364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.712673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.712716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.712918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.712959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.713211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.713252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.713464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.713479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.713649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.713664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.713754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.713770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.713907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.713922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.714010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.714062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.714391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.714434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.714642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.714684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.714952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.714993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.715247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.715288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.715531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.715574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.715787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.715828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.716059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.716073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.716253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.716293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.716541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.716585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.716780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.716820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.716954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.716994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.717208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.717249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.435 [2024-07-12 11:44:27.717480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.717496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.717662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.717702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.717905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.717946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.718254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.718295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 00:38:41.435 [2024-07-12 11:44:27.718546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.435 [2024-07-12 11:44:27.718589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.435 qpair failed and we were unable to recover it. 
00:38:41.438 [2024-07-12 11:44:27.744970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.745011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 00:38:41.438 [2024-07-12 11:44:27.745272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.745287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 00:38:41.438 [2024-07-12 11:44:27.745368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.745388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 00:38:41.438 [2024-07-12 11:44:27.745506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.745547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 00:38:41.438 [2024-07-12 11:44:27.745811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.745863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 
00:38:41.438 [2024-07-12 11:44:27.746089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.438 [2024-07-12 11:44:27.746130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.438 qpair failed and we were unable to recover it. 00:38:41.438 [2024-07-12 11:44:27.746345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.746360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.746531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.746547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.746775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.746816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.747024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.747065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.747322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.747363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.747681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.747723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.747913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.747954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.748153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.748169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.748338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.748393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.748674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.748716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.748996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.749037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.749329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.749371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.749698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.749738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.749966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.750007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.750225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.750278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.750435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.750450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.750533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.750564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.750788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.750804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.751072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.751088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.751346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.751397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.751624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.751666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.751833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.751874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.752172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.752213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.752393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.752437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.752626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.752644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.752811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.752852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.753055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.753095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.753331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.753373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.753589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.753631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.753916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.753956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.754245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.754285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.754562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.754605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.754821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.754912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.755232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.755273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.755480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.755524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.755689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.755703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.755834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.755876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.756027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.756070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.756272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.756314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 
00:38:41.439 [2024-07-12 11:44:27.756541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.756557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.756710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.756751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.439 [2024-07-12 11:44:27.756964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.439 [2024-07-12 11:44:27.757006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.439 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.757334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.757386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.757537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.757579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.757834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.757876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.758148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.758189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.758414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.758457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.758725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.758766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.759043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.759084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.759364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.759383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.759653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.759707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.760003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.760044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.760258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.760299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.760529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.760571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.760767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.760809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.761139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.761179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.761487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.761504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.761685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.761725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.762042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.762082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.762396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.762442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.762597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.762637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.762827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.762868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.763154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.763195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.763417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.763460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.763695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.763742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.764063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.764105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.764326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.764367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.764642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.764657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.764877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.764918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.765201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.765242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.765521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.765564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.765849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.765891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.766109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.766151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.766425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.766441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.766650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.766666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.766781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.766889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.766902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.767095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.767111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.767284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.767326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 
00:38:41.440 [2024-07-12 11:44:27.767485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.767528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.767760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.767801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.768102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.768143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.768451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.440 [2024-07-12 11:44:27.768496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.440 qpair failed and we were unable to recover it. 00:38:41.440 [2024-07-12 11:44:27.768789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.768831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.769092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.769133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.769298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.769341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.769595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.769611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.769785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.769827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.770055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.770096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.770399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.770431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.770588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.770603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.770770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.770786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.771049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.771091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.771352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.771367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.771559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.771575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.771679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.771735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.772017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.772072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.772366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.772421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.772626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.772669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.772940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.772982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.773268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.773310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.773463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.773479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.773634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.773666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.773977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.774019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.774239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.774286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.774485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.774528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.774788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.774829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.775114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.775156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.775429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.775471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.775763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.775805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.776071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.776112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.776330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.776345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.776445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.776497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.776790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.776831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 
00:38:41.441 [2024-07-12 11:44:27.777042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.777084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.777283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.777324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.441 qpair failed and we were unable to recover it. 00:38:41.441 [2024-07-12 11:44:27.777558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.441 [2024-07-12 11:44:27.777603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.777866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.777907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.778058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.778099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 
00:38:41.442 [2024-07-12 11:44:27.778390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.778433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.778595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.778636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.778788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.778830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.779029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.779069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.779298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.779341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 
00:38:41.442 [2024-07-12 11:44:27.779521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.779564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.779728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.779769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.780990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.781022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.781160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.781175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.781391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.781437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 
00:38:41.442 [2024-07-12 11:44:27.781653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.781695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.781976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.782019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.782313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.782355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.782647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.782689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.782944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.782987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 
00:38:41.442 [2024-07-12 11:44:27.783187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.783229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.783486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.783531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.783767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.783809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.784068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.784110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.784344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.784407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 
00:38:41.442 [2024-07-12 11:44:27.784589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.784604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.784751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.784767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.784854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.442 [2024-07-12 11:44:27.784867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.442 qpair failed and we were unable to recover it. 00:38:41.442 [2024-07-12 11:44:27.785110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.785147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.785422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.785468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 
00:38:41.721 [2024-07-12 11:44:27.785706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.785754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.785901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.785942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.786153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.786197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.786415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.786433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.786614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.786631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 
00:38:41.721 [2024-07-12 11:44:27.786781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.786798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.787007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.787023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.787221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.787237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.787320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.787333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.787494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.787511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 
00:38:41.721 [2024-07-12 11:44:27.787622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.721 [2024-07-12 11:44:27.787637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.721 qpair failed and we were unable to recover it. 00:38:41.721 [2024-07-12 11:44:27.787732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.787746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.787901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.787918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.788071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.788113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.788401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.788444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 
00:38:41.722 [2024-07-12 11:44:27.788704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.788746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.789026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.789082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.789296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.789338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.789564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.789580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.789742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.789758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 
00:38:41.722 [2024-07-12 11:44:27.789839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.789854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.790103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.790146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.790391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.790432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.790645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.790661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.790772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.790788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 
00:38:41.722 [2024-07-12 11:44:27.790941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.790956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.791139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.791181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.791504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.791595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.791838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.791882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 00:38:41.722 [2024-07-12 11:44:27.792147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.722 [2024-07-12 11:44:27.792170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.722 qpair failed and we were unable to recover it. 
00:38:41.722 [2024-07-12 11:44:27.792348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.792372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.792559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.792580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.792820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.792840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.793065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.793086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.793285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.793307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.793571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.793594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.793789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.793810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.793983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.794025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.794319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.794361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.794643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.794664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.794758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.794783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.794907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.794927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.795177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.795197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.795374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.795409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.795567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.795582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.795701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.795742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.795950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.795991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.796218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.796259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.796505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.796521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.722 [2024-07-12 11:44:27.796676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.722 [2024-07-12 11:44:27.796692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.722 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.796959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.797001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.797253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.797300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.797525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.797542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.797690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.797706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.797905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.797947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.798290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.798333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.798583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.798598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.798793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.798834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.799041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.799083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.799225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.799266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.799543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.799560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.799661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.799675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.799940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.799981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.800205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.800247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.800542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.800585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.800739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.800781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.800987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.801029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.801363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.801462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.801663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.801712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.802024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.802067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.802306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.802349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.802590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.802633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.804002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.804041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.804311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.804356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.804664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.804706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.804864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.804906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.805168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.805210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.805423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.805466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.805644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.805687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.805948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.805989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.806222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.806270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.806564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.806607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.806942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.806984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.807271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.807313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.807617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.807661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.807875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.807917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.808130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.808171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.808404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.808447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.808745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.808761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.808961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.723 [2024-07-12 11:44:27.808977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.723 qpair failed and we were unable to recover it.
00:38:41.723 [2024-07-12 11:44:27.809159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.809202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.809426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.809469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.809795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.809836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.810048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.810090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.810317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.810364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.810550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.810567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.810661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.810676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.810884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.810926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.811191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.811233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.811449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.811492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.811733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.811776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.811989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.812045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.812310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.812351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.812550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.812566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.812733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.812749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.812853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.812868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.813062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.813104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.813350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.813417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.813685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.813700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.813794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.813835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.814111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.814152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.814351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.814419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.814515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.814531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.814744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.814785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.815059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.815100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.815389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.815434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.815658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.815700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.815938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.815980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.816242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.816283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.816569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.816586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.816700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.816718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.816907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.816949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.817100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.817142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.817368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.817424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.817511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.817525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.817764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.817805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.818024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.818067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.818361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.818413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.818657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.818699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.818917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.818958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.819308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.819351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.819564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.819580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.724 [2024-07-12 11:44:27.819687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.724 [2024-07-12 11:44:27.819703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.724 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.819844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.819860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.820145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.820187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.820468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.820518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.820736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.820752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.820853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.820870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.820948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.820962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.821136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.821153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.821340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.821405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.821562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.821604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.821835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.821878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.822148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.822204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.822454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.822494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.822804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.822846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.823068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.823110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.823342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.823450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.823698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.823752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.823942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.823992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.824250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.824295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.824468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.824512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.824673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.824689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.824795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.824810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.824896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.824910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.825115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.825158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.825373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.825432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.825645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.825687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.825845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.825887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.826183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.826225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.826509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.826553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.826711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.826753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.827078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.827120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.827361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.827383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.827559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.827575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.827698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.827740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.827901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.725 [2024-07-12 11:44:27.827942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.725 qpair failed and we were unable to recover it.
00:38:41.725 [2024-07-12 11:44:27.828095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.828138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.828270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.828291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.828536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.828579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.828825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.828867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.829211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.829254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.829551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.829594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.829810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.829851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.830963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.830977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.831231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.831287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.831504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.831547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.831846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.831888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.832250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.832291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.832550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.832595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.832834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.832876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.833093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.833134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.833370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.833446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.833729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.833745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.833859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.833875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.834106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.834122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.834217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.834230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.834491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.834535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.834758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.834801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.834961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.835002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.835264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.835308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.835616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.835660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.835799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.835817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.836925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.836967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.726 [2024-07-12 11:44:27.837274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.726 [2024-07-12 11:44:27.837329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.726 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.837503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.837519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.837697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.837739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.837909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.837951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.838117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.838158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.838372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.838428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.838707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.838748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.838973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.839015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.839232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.839273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.839493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.839537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.839800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.839816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.839978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.840020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.840232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.840274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.840479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.840496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.840673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.840714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.840946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.727 [2024-07-12 11:44:27.840988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.727 qpair failed and we were unable to recover it.
00:38:41.727 [2024-07-12 11:44:27.841210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.841257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.841417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.841433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.841558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.841600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.841857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.841898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.842120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 
00:38:41.727 [2024-07-12 11:44:27.842428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.842546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.842652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.842779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.842954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.842997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 
00:38:41.727 [2024-07-12 11:44:27.843245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.843286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.843595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.843639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.843901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.843955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.844246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.844287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.844493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.844537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 
00:38:41.727 [2024-07-12 11:44:27.844759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.844801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.845011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.845053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.845320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.845361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.845596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.845638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.845872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.845914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 
00:38:41.727 [2024-07-12 11:44:27.846127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.846169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.846364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.846386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.846517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.846560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.846761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.846803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.847046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.847088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 
00:38:41.727 [2024-07-12 11:44:27.847389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.847426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.848509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.848544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.727 qpair failed and we were unable to recover it. 00:38:41.727 [2024-07-12 11:44:27.848763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.727 [2024-07-12 11:44:27.848779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.849000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.849043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.849252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.849294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.849578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.849621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.849845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.849886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.850061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.850104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.850236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.850251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.850456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.850499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.850673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.850715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.850946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.850988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.851265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.851307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.851520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.851537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.851730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.851772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.851964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.852218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.852476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.852601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.852781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.852941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.852957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.853243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.853284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.853499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.853549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.853767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.853809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.854025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.854066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.854330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.854371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.854648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.854664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.854775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.854790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.854887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.854904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.855096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.855112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.855273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.855289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.855486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.855528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.855729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.855770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.855985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.856026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.856346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.856404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.856532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.856548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.856793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.856836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.857017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.857058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.857364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.857422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.857616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.857632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.728 [2024-07-12 11:44:27.857804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.857820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.857959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.858000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.858275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.858315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.858605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.858649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 00:38:41.728 [2024-07-12 11:44:27.858917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.728 [2024-07-12 11:44:27.858959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.728 qpair failed and we were unable to recover it. 
00:38:41.729 [2024-07-12 11:44:27.859268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.859310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.859558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.859601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.859761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.859980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.860022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.860310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.860351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 
00:38:41.729 [2024-07-12 11:44:27.860607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.860625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.860730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.860757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.861012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.861053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.861370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.861426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.861657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.861673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 
00:38:41.729 [2024-07-12 11:44:27.861788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.861804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.861910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.861926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.862145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.862160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.862335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.862351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 00:38:41.729 [2024-07-12 11:44:27.862520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.729 [2024-07-12 11:44:27.862562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.729 qpair failed and we were unable to recover it. 
00:38:41.729 [2024-07-12 11:44:27.862783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.862825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.863108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.863150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.863363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.863435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.863600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.863616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.863785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.863827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.863994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.864035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.864329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.864371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.864555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.864598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.864819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.864862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.865013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.865055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.865283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.865338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.865557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.865574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.865688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.865727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.865937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.865978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.866219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.866261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.866463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.866479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.866600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.866630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.866899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.866940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.867190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.867233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.867512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.867528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.867624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.867638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.867820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.867861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.868175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.868217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.868365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.868418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.868565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.868606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.868822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.868864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.869040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.869082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.729 qpair failed and we were unable to recover it.
00:38:41.729 [2024-07-12 11:44:27.869280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.729 [2024-07-12 11:44:27.869321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.869535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.869577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.869766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.869782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.870007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.870050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.870206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.870247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.870489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.870539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.870635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.870649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.870836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.870879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.871139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.871180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.871493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.871542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.871653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.871670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.871781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.871797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.872009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.872026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.872185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.872241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.872512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.872564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.872741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.872789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.872985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.873027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.873245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.873287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.873550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.873567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.873715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.873732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.873908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.873950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.874191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.874233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.874401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.874446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.874720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.874736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.874949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.874965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.875184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.875201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.875353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.875369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.875517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.875535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.875631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.875651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.730 [2024-07-12 11:44:27.875804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.730 [2024-07-12 11:44:27.875820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.730 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.876034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.876077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.876373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.876433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.876574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.876590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.876753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.876768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.876915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.876931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.877229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.877271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.877565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.877615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.877738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.877753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.877854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.877868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.878046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.878062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.878174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.878217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.878516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.878559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.878707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.878723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.878974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.879016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.879304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.879347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.879528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.879571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.879791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.879832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.880076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.880118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.880317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.880362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.880690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.880733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.880880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.880922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.881216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.881259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.881423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.881467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.881690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.881733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.881982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.882023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.882220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.882268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.882413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.882456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.882674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.882715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.882943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.882960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.883215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.883262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.883491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.883535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.883759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.883813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.883918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.883934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.884189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.884206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.884357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.884374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.884523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.884540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.884768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.884785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.884884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.884898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.885010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.885025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.731 [2024-07-12 11:44:27.885318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.731 [2024-07-12 11:44:27.885361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.731 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.885596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.885640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.885868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.885911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.886193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.886236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.886483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.886500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.886728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.886770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.887081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.887124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.887360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.887414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.887656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.887699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.887876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.887894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.888106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.888149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.888466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.888510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.888751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.888768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.888865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.888880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.889076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.889119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.889294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.889336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.889585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.889629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.889823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.889840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.890049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.732 [2024-07-12 11:44:27.890091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.732 qpair failed and we were unable to recover it.
00:38:41.732 [2024-07-12 11:44:27.890398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.890443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.890732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.890748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.890876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.890918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.891257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.891300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.891528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.891588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 
00:38:41.732 [2024-07-12 11:44:27.891725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.891767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.891958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.891974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.892960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.893001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.893218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.893234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.893528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.893574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 
00:38:41.732 [2024-07-12 11:44:27.893800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.893842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.894085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.894128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.894411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.894461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.894562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.894578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.894749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.894791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 
00:38:41.732 [2024-07-12 11:44:27.894957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.894999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.895239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.895282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.895490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.895534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.895757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.895799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.896057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.896102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 
00:38:41.732 [2024-07-12 11:44:27.896426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.896473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.896691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.896733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.896897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.896941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.732 qpair failed and we were unable to recover it. 00:38:41.732 [2024-07-12 11:44:27.897155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.732 [2024-07-12 11:44:27.897197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.897525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.897576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.897668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.897683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.897828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.897868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.898138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.898181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.898469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.898514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.898706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.898722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.898815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.898829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.899113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.899157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.899376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.899431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.899688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.899705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.899843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.899885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.900238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.900573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.900617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.900862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.900903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.901232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.901274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.901605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.901651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.901861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.901904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.902231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.902274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.902481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.902525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.902689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.902745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.903025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.903068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.903248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.903291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.903516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.903559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.903770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.903789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.903907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.903948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.904250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.904293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.904547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.904591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.904891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.904939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.905138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.905155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.905324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.905366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.905527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.905549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.905717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.905733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.905844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.905860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.906029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.906072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.906224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.906267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.906516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.906569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.906740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.906756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.906877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.906919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.907152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.907194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.907364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.907465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.907612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.907655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 
00:38:41.733 [2024-07-12 11:44:27.907870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.907923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.908159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.733 [2024-07-12 11:44:27.908176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.733 qpair failed and we were unable to recover it. 00:38:41.733 [2024-07-12 11:44:27.908359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.908375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.908609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.908631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.908880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.908920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 
00:38:41.734 [2024-07-12 11:44:27.909090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.909132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.909413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.909468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.909687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.909704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.909823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.909839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.910011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.910054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 
00:38:41.734 [2024-07-12 11:44:27.910282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.910325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.911010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.911220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.911354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.911582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 
00:38:41.734 [2024-07-12 11:44:27.911700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.911889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.911906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.912001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.912016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.912144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.912159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 00:38:41.734 [2024-07-12 11:44:27.912328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.734 [2024-07-12 11:44:27.912345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.734 qpair failed and we were unable to recover it. 
00:38:41.734 [2024-07-12 11:44:27.912553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.912572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.912676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.912690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.912817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.912837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.912924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.912939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.913108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.913124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.913271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.913288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.913535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.913552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.913652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.913669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.913786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.913803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.914081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.914097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.914311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.914329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.914570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.914587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.914758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.914775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.915017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.915034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.915292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.915309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.915428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.734 [2024-07-12 11:44:27.915443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.734 qpair failed and we were unable to recover it.
00:38:41.734 [2024-07-12 11:44:27.915603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.915621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.915873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.915890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.916962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.916979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.917157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.917175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.917336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.917353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.917521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.917539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.917718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.917734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.917895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.917912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.918968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.918985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.919226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.919243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.919422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.919440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.919541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.919556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.919707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.919724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.919893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.919909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.920968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.920985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.921902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.735 [2024-07-12 11:44:27.921919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.735 qpair failed and we were unable to recover it.
00:38:41.735 [2024-07-12 11:44:27.922018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.922910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.922928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.923104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.923120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.923340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.923357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.923457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.923473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.923627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.923645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.923820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.923836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.924041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.924058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.924176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.924219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.924403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.924447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.924665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.924714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.924825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.924841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.925896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.925911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.926953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.926970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.927134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.927153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.927404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.927420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.927589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.927607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.927767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.927784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.927887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.927906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.928021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.928037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.928255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.928272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.928434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.928452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.928619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.928636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.928800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.928818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.929033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.929050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.929228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.929245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.929463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.929480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.736 qpair failed and we were unable to recover it.
00:38:41.736 [2024-07-12 11:44:27.929642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.736 [2024-07-12 11:44:27.929659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.929860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.929880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.930144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.930160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.930390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.930408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.930647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.930664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.930788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.930804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.930963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.930979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.931244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.931262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.931509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.931527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.931682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.931698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.931849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.931867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.932137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.932154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.932337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.932354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.932528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.932546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.932700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.932717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.932977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.932996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.933241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.933264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.933433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.933451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.933692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.933709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.933878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.933895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.934207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.934223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.934425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.934442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.934610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.934626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.934784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.934800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.934962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.934979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.935146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.935163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.935402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.935419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.935596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.935616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.935710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.935724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.935826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.935841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.936055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.936071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.936289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.936305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.936568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.936585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.936751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.936767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.936857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.936872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.937918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.937935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.938174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.938191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.938451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.737 [2024-07-12 11:44:27.938469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.737 qpair failed and we were unable to recover it.
00:38:41.737 [2024-07-12 11:44:27.938575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.938591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.938698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.938715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.938865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.938883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.939130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.939147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.939402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.939419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.939528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.939545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.939645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.939659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.939807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.939823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.940080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.940097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.940191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.940207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.940370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.940406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.940610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.940661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.941010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.941060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.941361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.941421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.941690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.941715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.941936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.941958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.942154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.942175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.942334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.942355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.942539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.942561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.942739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.942760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.942951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.942972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.943213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.943235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.943363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.943393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.943648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.943669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.943883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.943904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.944026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.944047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.944289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.944310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.944516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.944538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.944814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.944834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.945089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.945109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.945305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.945327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.945518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.945540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.945630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.945662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.945883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.945904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.946059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.946080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.738 [2024-07-12 11:44:27.946373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.738 [2024-07-12 11:44:27.946400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.738 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.946557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.946578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.946820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.946841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.947072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.947093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.947315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.947337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.947612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.947633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.947748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.947768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.947944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.947965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.948179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.948200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.948448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.948470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.948641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.948662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.948895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.948915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.949136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.949324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.949438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.949560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.739 [2024-07-12 11:44:27.949759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.739 qpair failed and we were unable to recover it.
00:38:41.739 [2024-07-12 11:44:27.949939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.949960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.950067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.950088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.950359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.950385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.950489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.950510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.950763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.950784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 
00:38:41.739 [2024-07-12 11:44:27.951028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.951049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.951218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.951238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.951426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.951447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.951711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.951733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.951910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.951939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 
00:38:41.739 [2024-07-12 11:44:27.952162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.952182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.952410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.952432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.952600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.952621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.952870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.952890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.953131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.953153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 
00:38:41.739 [2024-07-12 11:44:27.953334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.953356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.953626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.953648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.953870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.953891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.954045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.954066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.954255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.954276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 
00:38:41.739 [2024-07-12 11:44:27.954541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.954562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.954827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.954849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.954949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.954968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.955194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.955215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 00:38:41.739 [2024-07-12 11:44:27.955392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.739 [2024-07-12 11:44:27.955413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.739 qpair failed and we were unable to recover it. 
00:38:41.739 [2024-07-12 11:44:27.955585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.955607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.955768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.955789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.956014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.956035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.956288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.956309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.956464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.956485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.956647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.956667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.956856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.956877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.957152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.957173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.957292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.957313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.957478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.957499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.957680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.957701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.957801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.957820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.958003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.958242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.958406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.958537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.958740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.958958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.958978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.959198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.959219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.959403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.959425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.959645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.959667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.959770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.959788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.959962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.959983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.960169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.960190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.960361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.960394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.960568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.960588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.960698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.960721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.960835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.960858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.961014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.961035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.961238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.961259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.961467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.961488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.961661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.961681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.961905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.961925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.962086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.962213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.962458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.962649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.962788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.962974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.962995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.963274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.963295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 
00:38:41.740 [2024-07-12 11:44:27.963453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.963475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.963701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.963721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.963889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.740 [2024-07-12 11:44:27.963910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.740 qpair failed and we were unable to recover it. 00:38:41.740 [2024-07-12 11:44:27.964116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.964138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.964389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.964411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 
00:38:41.741 [2024-07-12 11:44:27.964583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.964603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.964725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.964746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.964927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.964947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.965061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.965267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 
00:38:41.741 [2024-07-12 11:44:27.965402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.965664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.965795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.965901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.965920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.966039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.966063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 
00:38:41.741 [2024-07-12 11:44:27.966318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.966338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.966513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.966535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.966709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.966731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.966925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.966946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.967112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.967133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 
00:38:41.741 [2024-07-12 11:44:27.967296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.967317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.967471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.967492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.967646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.967666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.967835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.967856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 00:38:41.741 [2024-07-12 11:44:27.968078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.741 [2024-07-12 11:44:27.968100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.741 qpair failed and we were unable to recover it. 
00:38:41.744 [2024-07-12 11:44:27.986985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.987118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.987246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.987421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.987546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 
00:38:41.744 [2024-07-12 11:44:27.987737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.987839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.987858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.988029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.988051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.988164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.988185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.988354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.988375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 
00:38:41.744 [2024-07-12 11:44:27.988560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.988582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.988811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.988832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.988998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.989123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.989240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 
00:38:41.744 [2024-07-12 11:44:27.989411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.989591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.989790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.989811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.990048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.990169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 
00:38:41.744 [2024-07-12 11:44:27.990295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.990437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.990550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.744 qpair failed and we were unable to recover it. 00:38:41.744 [2024-07-12 11:44:27.990686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.744 [2024-07-12 11:44:27.990707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.990872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.990892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.991074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.991337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.991451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.991629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.991768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.991936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.991957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.992121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.992142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.992313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.992333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.992555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.992576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.992659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.992677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.992850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.992870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.993640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.993970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.993991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.994094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.994213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.994321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.994548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.994720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.994900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.994921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.995144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.995260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.995559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.995670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.995799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.995914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.995935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.996104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.996223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.996351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.996466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.996583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.996706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.996875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.996896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.997061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.997082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.997232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.997252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.997354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.997375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 
00:38:41.745 [2024-07-12 11:44:27.997467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.745 [2024-07-12 11:44:27.997513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.745 qpair failed and we were unable to recover it. 00:38:41.745 [2024-07-12 11:44:27.997638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d280 is same with the state(5) to be set 00:38:41.745 [2024-07-12 11:44:27.997871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.997910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:27.998633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.998970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.998986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.999140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.999156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:27.999313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.999330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.999548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.999565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.999707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.999723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.999901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:27.999917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:27.999993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.000104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.000249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.000475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.000676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.000872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.000971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.000986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.001497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.001946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.001974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.002053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.002127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.002341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.002586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.002840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.002956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.002972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.003130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.003145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.003359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.003374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.003585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.003602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.003838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.003855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 
00:38:41.746 [2024-07-12 11:44:28.004017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.004033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.004136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.004151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.004302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.746 [2024-07-12 11:44:28.004317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.746 qpair failed and we were unable to recover it. 00:38:41.746 [2024-07-12 11:44:28.004490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.004507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.004673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.004689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.004925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.004953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.005662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.005942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.005959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.006125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.006310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.006469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.006561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.006718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.006878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.006894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.007009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.007209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.007300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.007471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.007635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.007739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.007951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.007967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.008643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.008890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.008905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.009045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.009060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.009205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.009220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 
00:38:41.747 [2024-07-12 11:44:28.009426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.009442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.009518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.747 [2024-07-12 11:44:28.009532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.747 qpair failed and we were unable to recover it. 00:38:41.747 [2024-07-12 11:44:28.009673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.009688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.009840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.009855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.009926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.009940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.010097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.010694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.010946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.010960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.011038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.011137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.011253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.011408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.011566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.011800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.011967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.011982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.012456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 00:38:41.748 [2024-07-12 11:44:28.012890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.748 [2024-07-12 11:44:28.012906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.748 qpair failed and we were unable to recover it. 
00:38:41.748 [2024-07-12 11:44:28.013045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.748 [2024-07-12 11:44:28.013061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.748 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with successive timestamps from 2024-07-12 11:44:28.013142 through 2024-07-12 11:44:28.029169 ...]
00:38:41.751 [2024-07-12 11:44:28.029255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.029360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.029515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.029604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.029741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 
00:38:41.751 [2024-07-12 11:44:28.029905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.029925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 
00:38:41.751 [2024-07-12 11:44:28.030633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.030998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.031014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.031091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.031105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 
00:38:41.751 [2024-07-12 11:44:28.031254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.751 [2024-07-12 11:44:28.031269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.751 qpair failed and we were unable to recover it. 00:38:41.751 [2024-07-12 11:44:28.031473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.031488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.031678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.031693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.031760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.031774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.031840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.031853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.031991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.032679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.032931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.032946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.033430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.033864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.033965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.033980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.034605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.034865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.034880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.035269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.035849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.035863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.036025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 
00:38:41.752 [2024-07-12 11:44:28.036744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.036959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.036973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.752 qpair failed and we were unable to recover it. 00:38:41.752 [2024-07-12 11:44:28.037106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.752 [2024-07-12 11:44:28.037121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.037208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.037367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.037495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.037590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.037703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.037865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.037967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.037981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.038651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.038856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.038871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.039048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.039243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.039375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.039598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.039798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.039886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.039901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.040128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.040142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.040281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.040296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.040503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.040518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.040691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.040705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.040849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.040864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.041006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.041170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.041325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.041493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.041648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.041748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.041943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.041958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.042654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.042888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.042903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.043036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.043050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 00:38:41.753 [2024-07-12 11:44:28.043183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.753 [2024-07-12 11:44:28.043197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.753 qpair failed and we were unable to recover it. 
00:38:41.753 [2024-07-12 11:44:28.043344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.043359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.043444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.043460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.043606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.043620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.043702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.043849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.043864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.044059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.044705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.044897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.044989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.045093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.045271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.045501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.045621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.045881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.045898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.046323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.046941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.046956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.047027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.047689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.047956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.047971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.048431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.048872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 
00:38:41.754 [2024-07-12 11:44:28.048967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.048982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.754 [2024-07-12 11:44:28.049189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.754 [2024-07-12 11:44:28.049203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.754 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.049337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.049514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.049620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.049712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.049800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.049946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.049960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.050199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.050764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.050894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.050921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.051547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.051921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.051934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.052097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.052259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.052410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.052626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.052725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 00:38:41.755 [2024-07-12 11:44:28.052807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:41.755 [2024-07-12 11:44:28.052821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:41.755 qpair failed and we were unable to recover it. 
00:38:41.755 [2024-07-12 11:44:28.052954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.755 [2024-07-12 11:44:28.052969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.755 qpair failed and we were unable to recover it.
00:38:41.755 [2024-07-12 11:44:28.053048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.755 [2024-07-12 11:44:28.053062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.755 qpair failed and we were unable to recover it.
00:38:41.755 [2024-07-12 11:44:28.053226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.755 [2024-07-12 11:44:28.053241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.755 qpair failed and we were unable to recover it.
00:38:41.755 [2024-07-12 11:44:28.053399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.755 [2024-07-12 11:44:28.053415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.755 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.053552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.053567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.053658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.053825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.053839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.053934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.053948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.054856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.054870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.055973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.055999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.056947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.056961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.057959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.057978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.058070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.058168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.058280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.058431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.756 [2024-07-12 11:44:28.058522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.756 qpair failed and we were unable to recover it.
00:38:41.756 [2024-07-12 11:44:28.058674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.757 [2024-07-12 11:44:28.058688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.757 qpair failed and we were unable to recover it.
00:38:41.757 [2024-07-12 11:44:28.058760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.757 [2024-07-12 11:44:28.058774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.757 qpair failed and we were unable to recover it.
00:38:41.757 [2024-07-12 11:44:28.058890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.757 [2024-07-12 11:44:28.058905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.757 qpair failed and we were unable to recover it.
00:38:41.757 [2024-07-12 11:44:28.058975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:41.757 [2024-07-12 11:44:28.058988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:41.757 qpair failed and we were unable to recover it.
00:38:42.041 [2024-07-12 11:44:28.059082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.059855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.059869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.060885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.060898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.061850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.061870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.062860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.062991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.063925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.063939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.042 qpair failed and we were unable to recover it.
00:38:42.042 [2024-07-12 11:44:28.064628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.042 [2024-07-12 11:44:28.064643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.064725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.064740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.064873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.064888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.064974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.064989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.065923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.065937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.066955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.066972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.043 [2024-07-12 11:44:28.067812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.043 qpair failed and we were unable to recover it.
00:38:42.043 [2024-07-12 11:44:28.067897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.067916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.068082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.068167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.068324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.068485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 
00:38:42.043 [2024-07-12 11:44:28.068572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.068786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.068801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 
00:38:42.043 [2024-07-12 11:44:28.069396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.043 [2024-07-12 11:44:28.069801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.043 [2024-07-12 11:44:28.069816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.043 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.069958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.069973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.070180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.070819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.070917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.070993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.071323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.071954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.071970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.072124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.072373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.072562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.072679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.072840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.072949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.072963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.073726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.073923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.073938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.074343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.074883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.074897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 
00:38:42.044 [2024-07-12 11:44:28.075029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.075044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.075115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.075130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.075200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.075214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.075309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.044 [2024-07-12 11:44:28.075324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.044 qpair failed and we were unable to recover it. 00:38:42.044 [2024-07-12 11:44:28.075391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.075405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.075474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.075488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.075648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.075662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.075740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.075756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.075843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.075857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.076108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.076691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.076943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.076957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.077355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.077886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.077900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.078101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.078330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.078508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.078606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.078692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.078903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.078918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.079123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.079138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.079286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.079301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.079387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.079403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 00:38:42.045 [2024-07-12 11:44:28.079537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.045 [2024-07-12 11:44:28.079552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.045 qpair failed and we were unable to recover it. 
00:38:42.045 [2024-07-12 11:44:28.079630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.079645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.079802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.079816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.079948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.079962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.045 [2024-07-12 11:44:28.080832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.045 qpair failed and we were unable to recover it.
00:38:42.045 [2024-07-12 11:44:28.080913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.080928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.081933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.081948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.082952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.082966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.083974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.083989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.084895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.084909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.085070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.085084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.046 qpair failed and we were unable to recover it.
00:38:42.046 [2024-07-12 11:44:28.085222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.046 [2024-07-12 11:44:28.085239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.085327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.085346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.085418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.085432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.085525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.085539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.085613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.085627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.085875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.085890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.086921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.086988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.087955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.087970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.088926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.088999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.089930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.089998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.090011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.090146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.090161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.090249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.090263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.090397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.047 [2024-07-12 11:44:28.090412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.047 qpair failed and we were unable to recover it.
00:38:42.047 [2024-07-12 11:44:28.090492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.090507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.090580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.090594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.090739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.090754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.090890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.090904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.090984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.090998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.048 [2024-07-12 11:44:28.091745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.048 qpair failed and we were unable to recover it.
00:38:42.048 [2024-07-12 11:44:28.091818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.091832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.091902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.091915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 
00:38:42.048 [2024-07-12 11:44:28.092427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.092838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 
00:38:42.048 [2024-07-12 11:44:28.092945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.092959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 
00:38:42.048 [2024-07-12 11:44:28.093778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.093974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.093989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 
00:38:42.048 [2024-07-12 11:44:28.094343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.094920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.094935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 
00:38:42.048 [2024-07-12 11:44:28.095071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.095086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.095230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.095245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.095395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.095410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.095552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.048 [2024-07-12 11:44:28.095567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.048 qpair failed and we were unable to recover it. 00:38:42.048 [2024-07-12 11:44:28.095651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.095665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.095798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.095812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.096658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.096926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.096942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.097170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.097268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.097349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.097583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.097704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.097935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.097955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.098131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.098250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.098427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.098683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.098803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.098969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.098993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.099157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.099357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.099511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.099730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.099844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.099928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.099943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.100479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.100970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.100985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.101050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.049 [2024-07-12 11:44:28.101209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.101326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.101421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.101621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 00:38:42.049 [2024-07-12 11:44:28.101724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.049 [2024-07-12 11:44:28.101738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.049 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.101896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.101910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.102054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.102279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.102374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.102666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.102833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.102924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.102938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.103533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.103961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.103976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.104051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.104131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.104275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.104445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.104552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.104717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.104868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.104883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.105425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.105961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.105976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.106134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.106150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 
00:38:42.050 [2024-07-12 11:44:28.106306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.106320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.106402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.106417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.106578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.106593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.106662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.050 [2024-07-12 11:44:28.106678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.050 qpair failed and we were unable to recover it. 00:38:42.050 [2024-07-12 11:44:28.106842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.106856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.107006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.107605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.107869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.107883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.108024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.108121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.108284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.108467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.108690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.108805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.108973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.108993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.109553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.109911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.109925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.110229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.110708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.110958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.110973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.111439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 00:38:42.051 [2024-07-12 11:44:28.111850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.051 [2024-07-12 11:44:28.111865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.051 qpair failed and we were unable to recover it. 
00:38:42.051 [2024-07-12 11:44:28.111946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.111960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.112541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.112832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.112987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.113159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.113260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.113370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.113550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.113674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.113788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.113890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.113909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.114622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.114890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.114905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.115213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.115737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.115918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.115932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.116176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.052 [2024-07-12 11:44:28.116618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 
00:38:42.052 [2024-07-12 11:44:28.116735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.052 [2024-07-12 11:44:28.116754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.052 qpair failed and we were unable to recover it. 00:38:42.053 [2024-07-12 11:44:28.116928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.053 [2024-07-12 11:44:28.116947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.053 qpair failed and we were unable to recover it. 00:38:42.053 [2024-07-12 11:44:28.117027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.053 [2024-07-12 11:44:28.117045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.053 qpair failed and we were unable to recover it. 00:38:42.053 [2024-07-12 11:44:28.117139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.053 [2024-07-12 11:44:28.117158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.053 qpair failed and we were unable to recover it. 00:38:42.053 [2024-07-12 11:44:28.117346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.053 [2024-07-12 11:44:28.117361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.053 qpair failed and we were unable to recover it. 
00:38:42.053 [2024-07-12 11:44:28.117441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.117547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.117643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.117753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.117853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.117956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.117970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.118947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.118961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.119952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.119966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.120856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.120871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.053 qpair failed and we were unable to recover it.
00:38:42.053 [2024-07-12 11:44:28.121782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.053 [2024-07-12 11:44:28.121796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.121944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.121958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.122830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.122980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.123977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.123990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.124844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.124859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.125967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.125982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.054 qpair failed and we were unable to recover it.
00:38:42.054 [2024-07-12 11:44:28.126066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.054 [2024-07-12 11:44:28.126080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.126902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.126917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.127925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.127948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.055 [2024-07-12 11:44:28.128807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.055 qpair failed and we were unable to recover it.
00:38:42.055 [2024-07-12 11:44:28.128877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.128892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.128978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.128993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 
00:38:42.055 [2024-07-12 11:44:28.129429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.129928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.129942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 
00:38:42.055 [2024-07-12 11:44:28.130088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 
00:38:42.055 [2024-07-12 11:44:28.130591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.055 [2024-07-12 11:44:28.130761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.055 qpair failed and we were unable to recover it. 00:38:42.055 [2024-07-12 11:44:28.130829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.130843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.131332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.131835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.131920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.131935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.132660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.132965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.132986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.133140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.133161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.133406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.133421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.133565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.133579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.133784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.133798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.133930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.133944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.134189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.134730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.134837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.134851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.135548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.135984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.135998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 
00:38:42.056 [2024-07-12 11:44:28.136138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.136152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.136239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.136254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.056 [2024-07-12 11:44:28.136393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.056 [2024-07-12 11:44:28.136408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.056 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.136567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.136582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.136729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.136743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.136911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.136925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.137395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.137792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.137996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.138143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.138231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.138429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.138538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.138698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.138922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.138936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.139573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 00:38:42.057 [2024-07-12 11:44:28.139943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.057 [2024-07-12 11:44:28.139958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.057 qpair failed and we were unable to recover it. 
00:38:42.057 [2024-07-12 11:44:28.140097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.057 [2024-07-12 11:44:28.140111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.057 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 11:44:28.140097 through 11:44:28.156048 (console time 00:38:42.057-00:38:42.060); every attempt reports errno = 111 for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 ...]
00:38:42.060 [2024-07-12 11:44:28.156128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.156375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.156561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.156662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.156776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 
00:38:42.060 [2024-07-12 11:44:28.156873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.156888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 
00:38:42.060 [2024-07-12 11:44:28.157648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.157961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.157975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.158061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.060 [2024-07-12 11:44:28.158075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.060 qpair failed and we were unable to recover it. 00:38:42.060 [2024-07-12 11:44:28.158143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.158157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.158333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.158347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.158528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.158543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.158713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.158736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.158898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.158913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.159057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.159221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.159408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.159576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.159733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.159829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.159977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.159992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.160667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.160903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.160917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.161323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.161691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.161855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.161870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.162535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.162856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.162871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.163010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.163109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.163195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.163407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.163497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 00:38:42.061 [2024-07-12 11:44:28.163594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.061 [2024-07-12 11:44:28.163609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.061 qpair failed and we were unable to recover it. 
00:38:42.061 [2024-07-12 11:44:28.163676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.163759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.163774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.163938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.163952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 
00:38:42.062 [2024-07-12 11:44:28.164294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.164756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 
00:38:42.062 [2024-07-12 11:44:28.164914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.164928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 
00:38:42.062 [2024-07-12 11:44:28.165552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.165974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.165989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 
00:38:42.062 [2024-07-12 11:44:28.166085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.166099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.166193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.166207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.166277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.166290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.166421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.166436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 00:38:42.062 [2024-07-12 11:44:28.166572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.062 [2024-07-12 11:44:28.166587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.062 qpair failed and we were unable to recover it. 
[... 95 further repetitions of the same sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — spanning 2024-07-12 11:44:28.166680 through 11:44:28.179852 ...]
00:38:42.065 [2024-07-12 11:44:28.179932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.179947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.180949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.180968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.065 [2024-07-12 11:44:28.181599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.065 qpair failed and we were unable to recover it.
00:38:42.065 [2024-07-12 11:44:28.181666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.181679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.181783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.181801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 
00:38:42.065 [2024-07-12 11:44:28.182304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 
00:38:42.065 [2024-07-12 11:44:28.182902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.182916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.182985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 
00:38:42.065 [2024-07-12 11:44:28.183501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.183934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.183949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.184093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.184107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 
00:38:42.065 [2024-07-12 11:44:28.184185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.065 [2024-07-12 11:44:28.184199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.065 qpair failed and we were unable to recover it. 00:38:42.065 [2024-07-12 11:44:28.184332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.184435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.184518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.184617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.184724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.184824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.184978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.184992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.185329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.185878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.185977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.185991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.186135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.186355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.186471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.186620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.186764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.186862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.186879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.187482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.187906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.187921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.188003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.188584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.188980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.188994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.189145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.189159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 
00:38:42.066 [2024-07-12 11:44:28.189226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.189240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.189326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.066 [2024-07-12 11:44:28.189341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.066 qpair failed and we were unable to recover it. 00:38:42.066 [2024-07-12 11:44:28.189490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.189505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.189721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.189735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.189869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.189883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 
00:38:42.067 [2024-07-12 11:44:28.189961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.189981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.190064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.190166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.190359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.190556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 
00:38:42.067 [2024-07-12 11:44:28.190727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.190841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.190856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.191007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.191021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.191090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.191105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 00:38:42.067 [2024-07-12 11:44:28.191188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.067 [2024-07-12 11:44:28.191202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.067 qpair failed and we were unable to recover it. 
00:38:42.067 [2024-07-12 11:44:28.191269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.191426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.191530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.191688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.191783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.191888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.191902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.192934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.192948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.193894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.067 [2024-07-12 11:44:28.193909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.067 qpair failed and we were unable to recover it.
00:38:42.067 [2024-07-12 11:44:28.194054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.194970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.194984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.195118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.195132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.195306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.195328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.195549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.195574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.195662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.195683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.195850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.195866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.196971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.196985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.197936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.197952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.068 qpair failed and we were unable to recover it.
00:38:42.068 [2024-07-12 11:44:28.198929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.068 [2024-07-12 11:44:28.198944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.199880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.199894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.200945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.200964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.201951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.201965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.202799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.202815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.203877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.203892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.204049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.204064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.204202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.204216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.204369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.204387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.204478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.204493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.069 qpair failed and we were unable to recover it.
00:38:42.069 [2024-07-12 11:44:28.204574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.069 [2024-07-12 11:44:28.204588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.204672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.204686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.204838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.204852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.204951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.204965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.205917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.205931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.206016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.206033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.206112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.070 [2024-07-12 11:44:28.206127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.070 qpair failed and we were unable to recover it.
00:38:42.070 [2024-07-12 11:44:28.206279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 
00:38:42.070 [2024-07-12 11:44:28.206740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.206854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.206996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 
00:38:42.070 [2024-07-12 11:44:28.207433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.207887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 
00:38:42.070 [2024-07-12 11:44:28.207966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.207979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 
00:38:42.070 [2024-07-12 11:44:28.208480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.208851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.208865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.209002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.209017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 
00:38:42.070 [2024-07-12 11:44:28.209224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.209238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.209400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.209422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.209509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.209524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.209605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.070 [2024-07-12 11:44:28.209620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.070 qpair failed and we were unable to recover it. 00:38:42.070 [2024-07-12 11:44:28.209817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.209831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.209898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.209912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.209997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.210457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.210836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.210849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.210990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.211111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.211347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.211504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.211719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.211830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.211845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.211998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.212219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.212368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.212539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.212715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.212819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.212834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.213644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.213931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.213946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.214198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.071 [2024-07-12 11:44:28.214885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.214979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.214993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.215081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.215095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.215167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.215181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 00:38:42.071 [2024-07-12 11:44:28.215263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.071 [2024-07-12 11:44:28.215277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.071 qpair failed and we were unable to recover it. 
00:38:42.072 [2024-07-12 11:44:28.215439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.072 [2024-07-12 11:44:28.215454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.072 qpair failed and we were unable to recover it. 00:38:42.072 [2024-07-12 11:44:28.215592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.072 [2024-07-12 11:44:28.215607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.072 qpair failed and we were unable to recover it. 00:38:42.072 [2024-07-12 11:44:28.215693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.072 [2024-07-12 11:44:28.215708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.072 qpair failed and we were unable to recover it. 00:38:42.072 [2024-07-12 11:44:28.215863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.072 [2024-07-12 11:44:28.215877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.072 qpair failed and we were unable to recover it. 00:38:42.072 [2024-07-12 11:44:28.215950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.072 [2024-07-12 11:44:28.215965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.072 qpair failed and we were unable to recover it. 
00:38:42.072 [2024-07-12 11:44:28.216101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.072 [2024-07-12 11:44:28.216115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.072 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats unchanged (same tqpair=0x61500033fe80, addr=10.0.0.2, port=4420, errno = 111) for every reconnect attempt from 11:44:28.216258 through 11:44:28.230832; only the timestamps differ ...]
00:38:42.075 [2024-07-12 11:44:28.230930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.230945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.231536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.231972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.231987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.232057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.232613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.232943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.232957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.233340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.233843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.233942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.233957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.234458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.234871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.234885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.235036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.235049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 
00:38:42.075 [2024-07-12 11:44:28.235182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.235197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.235269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.075 [2024-07-12 11:44:28.235283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.075 qpair failed and we were unable to recover it. 00:38:42.075 [2024-07-12 11:44:28.235374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.235394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.235481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.235495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.235562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.235575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.235653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.235668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.235769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.235784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.235991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.236433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.236780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.236941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.236955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.237693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.237981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.237995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.238129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.238277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.238368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.238525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.238623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.238872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.238966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.238980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.239114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.239212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.239362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.239518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 
00:38:42.076 [2024-07-12 11:44:28.239696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.076 [2024-07-12 11:44:28.239794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.076 [2024-07-12 11:44:28.239808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.076 qpair failed and we were unable to recover it. 00:38:42.077 [2024-07-12 11:44:28.239941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.077 [2024-07-12 11:44:28.239956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.077 qpair failed and we were unable to recover it. 00:38:42.077 [2024-07-12 11:44:28.240018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.077 [2024-07-12 11:44:28.240031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.077 qpair failed and we were unable to recover it. 00:38:42.077 [2024-07-12 11:44:28.240115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.077 [2024-07-12 11:44:28.240130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.077 qpair failed and we were unable to recover it. 
00:38:42.077 [2024-07-12 11:44:28.240280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.240307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.240524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.240547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.240701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.240721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.240894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.240914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.241955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.241970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.242925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.242940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.243860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.243998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.244852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.244866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.245837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.245994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.246008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.246085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.077 [2024-07-12 11:44:28.246099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.077 qpair failed and we were unable to recover it.
00:38:42.077 [2024-07-12 11:44:28.246311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.246325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.246495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.246512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.246671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.246690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.246765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.246780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.246949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.246963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.247170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.247185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.247359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.247421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.247558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.247598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.247720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.247760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.247893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.247933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.248059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.248099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.248297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.248337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.248626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.248668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.248874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.248914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.249190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.249231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.249365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.249461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.249649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.249696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.249980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.250019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.250212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.250252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.250538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.250580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.250786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.250826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.250961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.251193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.251423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.251608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.251771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.251947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.251987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.252118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.252159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.252408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.252422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.252508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.252523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.252704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.252744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.252892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.252932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.253192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.253233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.253490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.253505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.253737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.253778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.253917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.253957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.254209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.254249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.254460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.254502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.254654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.254694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.254912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.254953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.255229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.078 [2024-07-12 11:44:28.255269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.078 qpair failed and we were unable to recover it.
00:38:42.078 [2024-07-12 11:44:28.255408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.255449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.255648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.255689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.255872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.255886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.256944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.256985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.257194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.257234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.257362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.257436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.257652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.257692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.257813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.257827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.257996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.258035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.258226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.258266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.258467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.258516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.258709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.258724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.258921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.258935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.259888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.259935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.260056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.260096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.260354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.260416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.260617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.260632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.260844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.260885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.261019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.261059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.261316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.261356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.261560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.261603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.261818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.261858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.262138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.262178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.262330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.079 [2024-07-12 11:44:28.262344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.079 qpair failed and we were unable to recover it.
00:38:42.079 [2024-07-12 11:44:28.262418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.079 [2024-07-12 11:44:28.262434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.079 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.262576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.262617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.262756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.262797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.263000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.263040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.263228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.263268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.263543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.263585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.263847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.263862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.264019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.264033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.264096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.264109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.264348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.264388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.264580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.264620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.264891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.264932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.265146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.265186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.265315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.265356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.265574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.265615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.265743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.265784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.266049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.266090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.266317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.266357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.266630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.266690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.266840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.266855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.267015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.267055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.267239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.267280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.267468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.267508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.267717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.267732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.267909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.267925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.268014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.268052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.268203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.268244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.268395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.268437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.268642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.268682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.268875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.268915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.080 [2024-07-12 11:44:28.269114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.269156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.269358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.269420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.269637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.269695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.269778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.269792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 00:38:42.080 [2024-07-12 11:44:28.269934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.080 [2024-07-12 11:44:28.269949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.080 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.270011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.270166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.270403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.270591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.270696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.270858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.270949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.270962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.271042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.271055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.271209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.271255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.271441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.271482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.271678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.271720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.271911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.271925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.272009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.272063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.272199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.272240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.272375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.272426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.272612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.272653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.272804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.272845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.272979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.273072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.273301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.273341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.273612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.273654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.273919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.273959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.274095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.274135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.274323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.274364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.274511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.274558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.274786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.274826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.275014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.275055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.275261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.275302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.275505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.275548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.275774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.275815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.275966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.276007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.276286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.276327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.276524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.276539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.276696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.276736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.276863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.276904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.277049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.277090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.277371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.277423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.277574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.277589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.277863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.277903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.278034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.278075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.081 [2024-07-12 11:44:28.278211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.278252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 
00:38:42.081 [2024-07-12 11:44:28.278459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.081 [2024-07-12 11:44:28.278500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.081 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.278685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.278699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.278841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.278856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.278989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.279005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.279165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.279206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 
00:38:42.082 [2024-07-12 11:44:28.279352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.279403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.279606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.279647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.279837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.279851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.280056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.280070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 00:38:42.082 [2024-07-12 11:44:28.280305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.082 [2024-07-12 11:44:28.280346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.082 qpair failed and we were unable to recover it. 
00:38:42.082 [2024-07-12 11:44:28.280627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.280669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.280867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.280907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.281133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.281173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.281398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.281439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.281558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.281573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.281721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.281735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.281937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.281951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.282103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.282117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.282253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.282267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.282498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.282543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.282677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.282717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.283000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.283041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.283297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.283337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.283489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.283537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.283750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.283765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.283944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.283984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.284190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.284231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.284436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.284481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.284569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.284582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.284767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.284807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.284999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.285239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.285396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.285493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.285676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.285908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.285948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.286137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.286177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.286369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.286423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.286566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.286607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.286902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.286943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.287214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.287254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.082 qpair failed and we were unable to recover it.
00:38:42.082 [2024-07-12 11:44:28.287400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.082 [2024-07-12 11:44:28.287462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.287689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.287863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.287877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.288011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.288025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.288233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.288273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.288465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.288506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.288696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.288743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.288909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.288924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.289871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.289911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.290043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.290083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.290272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.290312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.290459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.290475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.290621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.290635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.290812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.290852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.291134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.291175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.291397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.291439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.291569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.291610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.291883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.291898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.292934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.292974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.293256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.293296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.293613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.293656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.293853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.293867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.294117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.294157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.294347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.294398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.294546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.294587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.294871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.294924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.295073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.295114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.295395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.295437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.295670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.295685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.295827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.295841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.296016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.083 [2024-07-12 11:44:28.296056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.083 qpair failed and we were unable to recover it.
00:38:42.083 [2024-07-12 11:44:28.296198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.296238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.296407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.296449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.296567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.296586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.296657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.296670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.296788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.296829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.297030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.297071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.297284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.297324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.297599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.297614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.297702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.297717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.297968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.298008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.298275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.298316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.298599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.298614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.298715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.298730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.298809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.298822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.299028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.299068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.299212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.299252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.299443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.299485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.299681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.299722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.299851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.299892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.300079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.300119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.300371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.300435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.300507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.300519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.300609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.084 [2024-07-12 11:44:28.300622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.084 qpair failed and we were unable to recover it.
00:38:42.084 [2024-07-12 11:44:28.300693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.300753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.300942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.300982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.301179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.301220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.301409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.301450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.301652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.301692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 
00:38:42.084 [2024-07-12 11:44:28.301878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.301919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.302110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.302152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.302406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.302447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.302645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.302659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.302798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.302812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 
00:38:42.084 [2024-07-12 11:44:28.302949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.302994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.084 qpair failed and we were unable to recover it. 00:38:42.084 [2024-07-12 11:44:28.303198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.084 [2024-07-12 11:44:28.303239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.303368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.303421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.303555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.303570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.303722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.303737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.303875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.303889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.304098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.304139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.304324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.304364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.304602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.304643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.304894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.304934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.305196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.305238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.305410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.305453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.305658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.305698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.305898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.305913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.306162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.306728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.306920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.306960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.307239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.307279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.307540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.307582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.307716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.307757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.307954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.307994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.308244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.308284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.308470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.308511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.308716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.308730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 00:38:42.085 [2024-07-12 11:44:28.308821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.085 [2024-07-12 11:44:28.308834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.085 qpair failed and we were unable to recover it. 
00:38:42.085 [2024-07-12 11:44:28.309033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.309047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.309300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.309314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.309474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.309517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.309748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.309788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.309944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.309989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.310141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.310226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.310325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.310469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.310721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.310951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.310991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.311120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.311160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.311436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.311479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.311705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.311745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.311939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.311979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.312166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.312207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.312439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.312497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.312759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.312800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.312925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.312966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.313112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.313152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.313296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.313336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.313568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.313609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.313745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.313786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.313991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.314166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.314328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.314491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.314650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.314881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.314921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.315056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.315096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.315228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.315267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.315451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.315493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.315679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.315720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.315965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.315981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.316137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.316178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.316364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.316413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.316614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.316655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.316727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.316740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.316906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.316960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.317108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.317148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 
00:38:42.086 [2024-07-12 11:44:28.317430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.317462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.317557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.086 [2024-07-12 11:44:28.317570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.086 qpair failed and we were unable to recover it. 00:38:42.086 [2024-07-12 11:44:28.317699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.317714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.317803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.317817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.317964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.318012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 
00:38:42.087 [2024-07-12 11:44:28.318199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.318239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.318422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.318464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.318771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.318786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.318869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.318882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.319010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.319024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 
00:38:42.087 [2024-07-12 11:44:28.319223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.319237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.319468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.319483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.319565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.319578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.319763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.319804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 00:38:42.087 [2024-07-12 11:44:28.320003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.087 [2024-07-12 11:44:28.320043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.087 qpair failed and we were unable to recover it. 
00:38:42.087 [2024-07-12 11:44:28.320230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.320270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.320586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.320628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.320821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.320862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.321022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.321036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.321242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.321283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.321572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.321614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.321803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.321843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.322064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.322104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.322311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.322352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.322551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.322591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.322852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.322898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.323128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.323169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.323376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.323429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.323629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.323644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.323830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.323844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.324004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.324044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.324323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.324363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.324566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.324581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.324665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.324679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.324769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.324782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.325003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.325044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.325239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.325279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.325489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.325530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.325624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.325638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.325781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.325795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.087 [2024-07-12 11:44:28.326597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.087 [2024-07-12 11:44:28.326637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.087 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.326826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.326866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.327107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.327148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.327292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.327349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.327553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.327594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.327838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.327852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.327938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.327978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.328140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.328181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.328368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.328416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.328559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.328574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.328726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.328781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.328987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.329027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.329312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.329353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.329552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.329863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.329904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.330151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.330166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.330316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.330331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.330466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.330481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.330678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.330692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.330839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.330880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.331098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.331145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.331334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.331375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.331571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.331611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.331753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.331769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.331983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.332025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.332159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.332199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.332473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.332514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.332668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.332683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.332765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.332778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.332986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.333001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.333217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.333232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.333310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.333322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.333485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.333500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.088 [2024-07-12 11:44:28.333655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.088 [2024-07-12 11:44:28.333669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.088 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.333768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.333823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.334031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.334072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.334260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.334300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.334552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.334594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.334734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.334774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.335042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.335082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.335339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.335389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.335696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.335710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.335919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.335959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.336093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.336133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.336360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.336411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.336616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.336656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.336908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.336949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.337075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.337115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.337257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.337297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.337571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.337613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.337843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.337883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.338152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.338166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.338343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.338357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.338521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.338563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.338749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.338790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.339078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.339120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.339244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.339284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.339416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.339458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.339741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.339782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.339993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.340033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.340226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.340272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.340426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.340467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.340697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.340738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.340881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.340895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.341045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.089 [2024-07-12 11:44:28.341059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.089 qpair failed and we were unable to recover it.
00:38:42.089 [2024-07-12 11:44:28.341137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.341175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.341438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.341481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.341606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.341657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.341847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.341888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.342120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 
00:38:42.089 [2024-07-12 11:44:28.342335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.342431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.342597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.342743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.342903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.342917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 
00:38:42.089 [2024-07-12 11:44:28.343052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.343066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.089 [2024-07-12 11:44:28.343227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.089 [2024-07-12 11:44:28.343267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.089 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.343545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.343588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.343807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.343847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.343986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.344027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.344156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.344196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.344463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.344504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.344698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.344739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.344853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.344867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.345068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.345082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.345228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.345242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.345475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.345516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.345802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.345889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.346104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.346217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.346418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.346590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.346772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.346888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.346905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.347072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.347087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.347233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.347274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.347475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.347517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.347776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.347816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.348072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.348112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.348298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.348339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.348551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.348598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.348731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.348771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.348902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.348943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.349133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.349174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.349429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.349470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.349567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.349580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.349745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.349785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.349928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.349968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.350102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.350142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.350327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.350367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.350658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.350699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.350907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.350947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.351149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.351190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.351323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.351363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.351599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.351641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 
00:38:42.090 [2024-07-12 11:44:28.351900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.351948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.352094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.352109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.352186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.352199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.352423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.090 [2024-07-12 11:44:28.352438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.090 qpair failed and we were unable to recover it. 00:38:42.090 [2024-07-12 11:44:28.352571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.352586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.352798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.352812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.352945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.352960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.353113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.353127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.353203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.353236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.353435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.353476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.353674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.353715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.353973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.353988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.354168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.354268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.354482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.354575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.354668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.354831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.354871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.355137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.355177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.355403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.355443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.355703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.355743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.355979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.356021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.356158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.356210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.356355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.356407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.356674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.356715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.356926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.356943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.357167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.357208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.357347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.357407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.357679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.357693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.357860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.357901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.358123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.358164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.358351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.358404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.358597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.358637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.358786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.358801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.358880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.358893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.091 [2024-07-12 11:44:28.359033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.359047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.359268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.359310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.359508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.359550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.359774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.359814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 00:38:42.091 [2024-07-12 11:44:28.359965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.091 [2024-07-12 11:44:28.359979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.091 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.382339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.382412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.382539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.382579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.382710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.382751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.382965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.383006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.383214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.383255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.383440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.383483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.383616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.383656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.383850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.383891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.384016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.384030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.384197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.384234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.384460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.384502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.384755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.384795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.385047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.385088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.385236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.385277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.385426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.385468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.385719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.385760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.385951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.385991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.386124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.386164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.386360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.386409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.386551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.386591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.386723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.386763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.386910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.386949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.387161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.387175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.387255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.387268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.387418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.387432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.387594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.387609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.387829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.387870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.388011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.388052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.388190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.388233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 00:38:42.382 [2024-07-12 11:44:28.388450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.388492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.382 qpair failed and we were unable to recover it. 
00:38:42.382 [2024-07-12 11:44:28.388628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.382 [2024-07-12 11:44:28.388667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.388864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.388879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.389038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.389079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.389275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.389316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.389481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.389523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.389728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.389773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.389916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.389956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.390149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.390189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.390465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.390507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.390628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.390668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.390922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.390963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.391186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.391226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.391451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.391492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.391682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.391722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.391858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.391899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.392096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.392136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.392365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.392416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.392623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.392664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.392781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.392796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.392945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.392960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.393044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.393056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.393265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.393306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.393471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.393513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.393701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.393939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.393979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.394109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.394149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.394282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.394322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.394546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.394588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.394841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.394882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.395088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.395103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.395261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.395301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.395504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.395546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.395768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.395815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.395984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.396253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.396417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.396519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.396688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.396848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.396889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.397032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.397072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 
00:38:42.383 [2024-07-12 11:44:28.397289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.383 [2024-07-12 11:44:28.397304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.383 qpair failed and we were unable to recover it. 00:38:42.383 [2024-07-12 11:44:28.397473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.397488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.397559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.397572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.397724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.397764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.397897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.397937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 
00:38:42.384 [2024-07-12 11:44:28.398121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.398161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.398446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.398488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.398637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.398678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.398965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.399005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 00:38:42.384 [2024-07-12 11:44:28.399206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.384 [2024-07-12 11:44:28.399247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.384 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.422825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.422840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.422989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.423004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.423186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.423238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.423443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.423485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.423788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.423829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.423973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.424013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.424232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.424247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.424494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.424509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.424685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.424725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.424878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.424917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.425147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.425187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.425308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.425347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.425500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.425541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.425676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.425715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.425925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.425965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.426265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.426306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.426507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.426549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.426779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.426820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.427102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.427143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.427341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.427392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.427668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.427708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.427903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.427918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.428121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.428222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.428367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.428467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.428710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.428869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.428909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.429123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.429163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.429306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.429346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.429571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.429612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.429796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.429836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.430085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.430130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.430349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.430363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.430467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.430481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 
00:38:42.387 [2024-07-12 11:44:28.430645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.387 [2024-07-12 11:44:28.430674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.387 qpair failed and we were unable to recover it. 00:38:42.387 [2024-07-12 11:44:28.430897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.430937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.431184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.431224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.431411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.431454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.431593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.431632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.431889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.431904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.432001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.432015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.432267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.432308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.432461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.432503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.432742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.432783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.432921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.432959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.433093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.433107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.433202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.433215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.433360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.433410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.433618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.433659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.433839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.433853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.433989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.434003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.434152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.434192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.434406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.434446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.434652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.434693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.434823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.434863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.435054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.435094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.435403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.435445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.435647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.435687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.435999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.436039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.436294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.436334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.436541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.436583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.436784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.436824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.437361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.437943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.437957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.438024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.438038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.438194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.438233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.438436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.438483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.438680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.438718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.438863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.438878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 
00:38:42.388 [2024-07-12 11:44:28.439031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.439072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.388 [2024-07-12 11:44:28.439223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.388 [2024-07-12 11:44:28.439262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.388 qpair failed and we were unable to recover it. 00:38:42.389 [2024-07-12 11:44:28.439536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.389 [2024-07-12 11:44:28.439578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.389 qpair failed and we were unable to recover it. 00:38:42.389 [2024-07-12 11:44:28.439703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.389 [2024-07-12 11:44:28.439743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.389 qpair failed and we were unable to recover it. 00:38:42.389 [2024-07-12 11:44:28.440045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.389 [2024-07-12 11:44:28.440085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.389 qpair failed and we were unable to recover it. 
00:38:42.391 [2024-07-12 11:44:28.457616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.457629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.457759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.457773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.457842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.457854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.457944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.457957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.458913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.458929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.391 [2024-07-12 11:44:28.459747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.391 [2024-07-12 11:44:28.459760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.391 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.459910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.459928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.460790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.460990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.461966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.461981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.462929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.462943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.463868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.463888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.392 [2024-07-12 11:44:28.464943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.392 [2024-07-12 11:44:28.464960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.392 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.465962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.465975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.466915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.466987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.467906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.467921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.468844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.468859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.469884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.469897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.470096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.470111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.470198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.470213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.470284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.470298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.470449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.470467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.393 qpair failed and we were unable to recover it.
00:38:42.393 [2024-07-12 11:44:28.470534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.393 [2024-07-12 11:44:28.470547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.470626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.470640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.470778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.470793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.470937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.470959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.471876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.471890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.472843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.472857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.473943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.473965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.394 [2024-07-12 11:44:28.474954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.394 [2024-07-12 11:44:28.474968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.394 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.475835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.475848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.476962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.476978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.477954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.477968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.478914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.478935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.479926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.479997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.480011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.395 qpair failed and we were unable to recover it.
00:38:42.395 [2024-07-12 11:44:28.480096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.395 [2024-07-12 11:44:28.480111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.480980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.480996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.481880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.481895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.396 [2024-07-12 11:44:28.482787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.396 qpair failed and we were unable to recover it.
00:38:42.396 [2024-07-12 11:44:28.482986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 
00:38:42.396 [2024-07-12 11:44:28.483567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.483983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.483997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.484059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 
00:38:42.396 [2024-07-12 11:44:28.484155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.484322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.484424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.484639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.484784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 
00:38:42.396 [2024-07-12 11:44:28.484893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.484908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.485045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.485058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.485128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.485142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.485284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.396 [2024-07-12 11:44:28.485298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.396 qpair failed and we were unable to recover it. 00:38:42.396 [2024-07-12 11:44:28.485522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.485537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.485685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.485699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.485789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.485804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.485968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.485983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.486314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.486778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.486792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.487035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.487708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.487925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.487997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.488176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.488789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.488985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.488999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.489312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.489853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.489867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.490008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.490022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.490104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.490119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.490186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.490201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.490285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.490299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 00:38:42.397 [2024-07-12 11:44:28.490384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.397 [2024-07-12 11:44:28.490398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.397 qpair failed and we were unable to recover it. 
00:38:42.397 [2024-07-12 11:44:28.490547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.490562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.490637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.490652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.490731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.490745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.490881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.490895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 
00:38:42.398 [2024-07-12 11:44:28.491126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 
00:38:42.398 [2024-07-12 11:44:28.491720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.491985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.491999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 
00:38:42.398 [2024-07-12 11:44:28.492416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.492840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 
00:38:42.398 [2024-07-12 11:44:28.492919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.493022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.493036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.493126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.493141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.493231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.493245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 00:38:42.398 [2024-07-12 11:44:28.493326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.398 [2024-07-12 11:44:28.493340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.398 qpair failed and we were unable to recover it. 
00:38:42.398 [2024-07-12 11:44:28.493410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.493929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.493993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.494006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.494088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.494102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.494170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.494184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.494250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.398 [2024-07-12 11:44:28.494265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.398 qpair failed and we were unable to recover it.
00:38:42.398 [2024-07-12 11:44:28.494402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.494417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.494554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.494568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.494634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.494648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.494867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.494881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.495896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.495915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.496908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.496922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.497896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.497911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.498972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.498986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.499915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.499992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.500006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.399 [2024-07-12 11:44:28.500088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.399 [2024-07-12 11:44:28.500102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.399 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.500185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.500200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.500419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.500433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.500580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.500595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.500796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.500811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.500979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.500993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.501893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.501907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.502000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.502015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.502241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.502256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.502354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.502368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.502621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.502636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.502786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.502801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.503873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.503888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.504978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.504996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.400 qpair failed and we were unable to recover it.
00:38:42.400 [2024-07-12 11:44:28.505569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.400 [2024-07-12 11:44:28.505581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.505723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.505737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.505882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.505896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.505992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.506899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.506913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.507970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.507984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.508929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.508943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.509114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.509129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.509211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.401 [2024-07-12 11:44:28.509225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.401 qpair failed and we were unable to recover it.
00:38:42.401 [2024-07-12 11:44:28.509309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.509323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.509547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.509561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.509762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.509776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.509846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.509861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.509991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 
00:38:42.401 [2024-07-12 11:44:28.510093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 
00:38:42.401 [2024-07-12 11:44:28.510664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.401 [2024-07-12 11:44:28.510834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.401 qpair failed and we were unable to recover it. 00:38:42.401 [2024-07-12 11:44:28.510907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.510921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.511104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.511556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.511931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.511944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.512075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.512644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.512895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.512918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.513248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.513764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.513936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.513950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.514055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.514210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.514304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.514477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 
00:38:42.402 [2024-07-12 11:44:28.514697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.402 [2024-07-12 11:44:28.514784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.402 [2024-07-12 11:44:28.514798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.402 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.514892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.514906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.515360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.515871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.515955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.515970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.516509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.516908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.516922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.517064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.517213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.517361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.517468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.517585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.517678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.517838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.517853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.518077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.518091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.518160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.518172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.518326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.518341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 00:38:42.403 [2024-07-12 11:44:28.518488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.403 [2024-07-12 11:44:28.518502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.403 qpair failed and we were unable to recover it. 
00:38:42.403 [2024-07-12 11:44:28.518648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.518662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.518811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.518826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.518959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.518973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.519909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.519923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.520000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.403 [2024-07-12 11:44:28.520014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.403 qpair failed and we were unable to recover it.
00:38:42.403 [2024-07-12 11:44:28.520081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.520852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.520996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.521976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.521990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.522127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.522142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.522226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.522240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.522460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.522475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.522689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.522703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.522858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.522873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.523804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.523818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.524961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.524975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.525924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.404 [2024-07-12 11:44:28.525938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.404 qpair failed and we were unable to recover it.
00:38:42.404 [2024-07-12 11:44:28.526111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.526844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.526995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.527907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.527921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.528834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.528973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.529970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.529983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.530956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.530970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.531043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.405 [2024-07-12 11:44:28.531056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.405 qpair failed and we were unable to recover it.
00:38:42.405 [2024-07-12 11:44:28.531194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.531939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.531954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.532967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.532982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.406 [2024-07-12 11:44:28.533893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.406 qpair failed and we were unable to recover it.
00:38:42.406 [2024-07-12 11:44:28.533973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.534144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.534224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.534395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.534561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 
00:38:42.406 [2024-07-12 11:44:28.534631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.534829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.534851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.535074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.535093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.535340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.535360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.535511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 
00:38:42.406 [2024-07-12 11:44:28.535593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.535606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.535765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.535778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 
00:38:42.406 [2024-07-12 11:44:28.536356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 00:38:42.406 [2024-07-12 11:44:28.536964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.406 [2024-07-12 11:44:28.536979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.406 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.537056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.537203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.537298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.537460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.537564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.537725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.537903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.537917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.538437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.538965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.538979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.539114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.539666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.539960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.539974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.540376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.540979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.540993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.407 [2024-07-12 11:44:28.541077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.541091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.541296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.541309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.541412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.541427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.541509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.541524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 00:38:42.407 [2024-07-12 11:44:28.541693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.407 [2024-07-12 11:44:28.541707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.407 qpair failed and we were unable to recover it. 
00:38:42.408 [2024-07-12 11:44:28.541860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.541874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.541941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.541956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 
00:38:42.408 [2024-07-12 11:44:28.542489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.542849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.542988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.543075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 
00:38:42.408 [2024-07-12 11:44:28.543220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.543433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.543527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.543623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.543781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 
00:38:42.408 [2024-07-12 11:44:28.543872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.543886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.544099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.544113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.544200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.544376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.544394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 00:38:42.408 [2024-07-12 11:44:28.544480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.408 [2024-07-12 11:44:28.544495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.408 qpair failed and we were unable to recover it. 
00:38:42.408 [2024-07-12 11:44:28.544577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.544592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.544731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.544744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.544822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.544836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.544913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.544927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.544989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.545147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.545313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.545528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.545756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.545909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.545924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.546891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.546911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.547126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.547146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.547233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.547249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.547330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.547344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.547503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.547518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.408 qpair failed and we were unable to recover it.
00:38:42.408 [2024-07-12 11:44:28.547586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.408 [2024-07-12 11:44:28.547598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.547727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.547751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.547884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.547898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.547977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.547992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.548155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.548171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.548303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.548318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.548565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.548579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.548725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.548741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.548835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.548848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.549962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.549976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.550945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.550959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.551890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.551904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.552854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.552870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.409 qpair failed and we were unable to recover it.
00:38:42.409 [2024-07-12 11:44:28.553906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.409 [2024-07-12 11:44:28.553920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.554945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.554959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.555935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.555950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.556799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.556813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.557945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.557959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.558920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.558934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.559002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.410 [2024-07-12 11:44:28.559015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.410 qpair failed and we were unable to recover it.
00:38:42.410 [2024-07-12 11:44:28.559104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.559978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.559991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.560140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.560228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.560448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.560713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.411 [2024-07-12 11:44:28.560929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.411 qpair failed and we were unable to recover it.
00:38:42.411 [2024-07-12 11:44:28.560996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 
00:38:42.411 [2024-07-12 11:44:28.561682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.561964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.561978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 
00:38:42.411 [2024-07-12 11:44:28.562319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.562915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.562929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 
00:38:42.411 [2024-07-12 11:44:28.563028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.563128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.563216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.563363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.563532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 
00:38:42.411 [2024-07-12 11:44:28.563613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.411 [2024-07-12 11:44:28.563626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.411 qpair failed and we were unable to recover it. 00:38:42.411 [2024-07-12 11:44:28.563762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.563777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.563929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.563943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.564387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.564872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.564970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.564984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.565686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.565843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.565998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.566161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.566342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.566450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.566598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.566693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.566866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.566881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.567021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.567168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.567350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.567458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.567621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.567789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.567903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.567918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.568610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.568918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.568933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.569085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.569099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.569166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.569178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 
00:38:42.412 [2024-07-12 11:44:28.569428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.569443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.569548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.412 [2024-07-12 11:44:28.569561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.412 qpair failed and we were unable to recover it. 00:38:42.412 [2024-07-12 11:44:28.569694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.569709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.569776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.569789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.569951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.569965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 
00:38:42.413 [2024-07-12 11:44:28.570118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.570330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.570488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.570593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.570679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 
00:38:42.413 [2024-07-12 11:44:28.570833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.570952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.570966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.571036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.571049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.571132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.571146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 00:38:42.413 [2024-07-12 11:44:28.571358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.413 [2024-07-12 11:44:28.571371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.413 qpair failed and we were unable to recover it. 
00:38:42.413 [2024-07-12 11:44:28.571511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.413 [2024-07-12 11:44:28.571527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.413 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) and unrecoverable qpair errors for tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420) repeat continuously from 11:44:28.571605 through 11:44:28.585720 ...]
00:38:42.416 [2024-07-12 11:44:28.585879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.585894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.585976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.585990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 
00:38:42.416 [2024-07-12 11:44:28.586508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.586886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.586900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 
00:38:42.416 [2024-07-12 11:44:28.587125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 
00:38:42.416 [2024-07-12 11:44:28.587676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.587844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.587859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 
00:38:42.416 [2024-07-12 11:44:28.588398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.588812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.588826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.589027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.589042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 
00:38:42.416 [2024-07-12 11:44:28.589192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.589206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.589280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.589297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.589387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.589401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.589555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.416 [2024-07-12 11:44:28.589568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.416 qpair failed and we were unable to recover it. 00:38:42.416 [2024-07-12 11:44:28.589712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.589727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.589822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.589837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.590690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.590791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.590996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.591346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.591979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.591994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.592198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.592287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.592373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.592463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.592676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.592922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.592936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.593003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.593263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.593525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.593654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.593767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.593896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.593916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.594576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.594922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.594937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.595083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.595172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.595348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.595456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.595634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 00:38:42.417 [2024-07-12 11:44:28.595817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.417 [2024-07-12 11:44:28.595837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.417 qpair failed and we were unable to recover it. 
00:38:42.417 [2024-07-12 11:44:28.595929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.418 [2024-07-12 11:44:28.595948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.418 qpair failed and we were unable to recover it. 00:38:42.418 [2024-07-12 11:44:28.596183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.418 [2024-07-12 11:44:28.596203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.418 qpair failed and we were unable to recover it. 00:38:42.418 [2024-07-12 11:44:28.596389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.418 [2024-07-12 11:44:28.596409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.418 qpair failed and we were unable to recover it. 00:38:42.418 [2024-07-12 11:44:28.596618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.418 [2024-07-12 11:44:28.596634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.418 qpair failed and we were unable to recover it. 00:38:42.418 [2024-07-12 11:44:28.596725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.418 [2024-07-12 11:44:28.596739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.418 qpair failed and we were unable to recover it. 
00:38:42.418 [2024-07-12 11:44:28.596806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.596821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.596912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.596926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.597946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.597960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.598932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.598947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.599977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.599991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.600923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.600938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.601101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.601116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.601185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.601198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.601285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.601302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.418 [2024-07-12 11:44:28.601400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.418 [2024-07-12 11:44:28.601416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.418 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.601487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.601500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.601636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.601650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.601859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.601873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.601949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.601963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.602936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.602949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.603941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.603955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.604945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.604960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.605818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.605832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.606003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.606017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.606193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.606207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.606341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.606358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.606444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.419 [2024-07-12 11:44:28.606457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.419 qpair failed and we were unable to recover it.
00:38:42.419 [2024-07-12 11:44:28.606522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.606536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.606717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.606731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.606831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.606846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.606996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.607937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.607951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.608906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.608920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.609966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.609980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.610877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.610892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.611844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.611858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.420 qpair failed and we were unable to recover it.
00:38:42.420 [2024-07-12 11:44:28.612020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.420 [2024-07-12 11:44:28.612036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.421 qpair failed and we were unable to recover it.
00:38:42.421 [2024-07-12 11:44:28.612178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.421 [2024-07-12 11:44:28.612192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.421 qpair failed and we were unable to recover it.
00:38:42.421 [2024-07-12 11:44:28.612334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.421 [2024-07-12 11:44:28.612348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.421 qpair failed and we were unable to recover it.
00:38:42.421 [2024-07-12 11:44:28.612551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.421 [2024-07-12 11:44:28.612566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.421 qpair failed and we were unable to recover it.
00:38:42.421 [2024-07-12 11:44:28.612653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.612667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.612818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.612832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.612969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.612983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.613058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.613204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.613365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.613522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.613673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.613830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.613844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.614065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.614079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.614234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.614248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.614383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.614398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.614547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.614561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.617586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.617601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.617850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.617865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.617956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.617988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.618092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.618306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.618486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.618701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.618801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.618895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.618910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.619505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.619935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.619949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.620029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.620135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.620227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.620334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.620415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 00:38:42.421 [2024-07-12 11:44:28.620503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.421 qpair failed and we were unable to recover it. 
00:38:42.421 [2024-07-12 11:44:28.620700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.421 [2024-07-12 11:44:28.620714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.620787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.620803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.620883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.620896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.620965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.620979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.621215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.621902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.621916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.621997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.622597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.622945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.622958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.623096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.623246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.623356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.623527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.623676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.623890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.623983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.623996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.624100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.624276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.624556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.624679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.624798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.624963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.624979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.625655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.625923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.625991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.626004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 00:38:42.422 [2024-07-12 11:44:28.626141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.422 [2024-07-12 11:44:28.626155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.422 qpair failed and we were unable to recover it. 
00:38:42.422 [2024-07-12 11:44:28.626231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.626245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.626382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.626397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.626599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.626613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.626698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.626711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.626884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.626900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.627037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.627134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.627213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.627385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.627545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.627691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.627861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.627875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.628466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.628976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.628990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.629063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.629494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.629934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.629947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.630029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.630608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.630941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.630955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.631023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.631036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.423 [2024-07-12 11:44:28.631112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.631125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 
00:38:42.423 [2024-07-12 11:44:28.631211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.423 [2024-07-12 11:44:28.631225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.423 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.631295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.631392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.631501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.631656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.631808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.631889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.631903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.632458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.632979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.632992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.633064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.633657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.633922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.633936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.634292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.634759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 
00:38:42.424 [2024-07-12 11:44:28.634854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.634867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.635002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.424 [2024-07-12 11:44:28.635016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.424 qpair failed and we were unable to recover it. 00:38:42.424 [2024-07-12 11:44:28.635180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.635348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.635518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.635615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.635711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.635809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.635823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.635987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.636072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.636306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.636489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.636643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.636735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.636894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.636972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.636986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.637683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.637960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.637974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.638450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.638937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.638950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.425 [2024-07-12 11:44:28.639045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.639059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.639131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.639145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.639290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.639305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.639510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.639524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 00:38:42.425 [2024-07-12 11:44:28.639673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.425 [2024-07-12 11:44:28.639687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.425 qpair failed and we were unable to recover it. 
00:38:42.426 [2024-07-12 11:44:28.639829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.639843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.639980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.639995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.640859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.640991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.641854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.641869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.642915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.642929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.643895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.643909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.644112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.644133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.644353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.644367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.644567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.644581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.644727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.644741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.644824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.644838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.645066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.645080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.645167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.426 [2024-07-12 11:44:28.645182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.426 qpair failed and we were unable to recover it.
00:38:42.426 [2024-07-12 11:44:28.645332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.645346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.645480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.645496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.645569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.645584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.645717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.645733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.645817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.645831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.645996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.646866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.646880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.647840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.647855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.648935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.648948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.649944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.649958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.650039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.650053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.650120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.650134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.650202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.650216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.650290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.650304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.427 [2024-07-12 11:44:28.650370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.427 [2024-07-12 11:44:28.650388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.427 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.650615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.650629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.650759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.650773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.650848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.650865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.650943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.650957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.651981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.651995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.652128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.652142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.652365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.428 [2024-07-12 11:44:28.652399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.428 qpair failed and we were unable to recover it.
00:38:42.428 [2024-07-12 11:44:28.652465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.652480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.652556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.652571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.652717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.652740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.652820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.652834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.652932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.652947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 
00:38:42.428 [2024-07-12 11:44:28.653096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 
00:38:42.428 [2024-07-12 11:44:28.653594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.653946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.653959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.654047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 
00:38:42.428 [2024-07-12 11:44:28.654135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.654283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.654437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.654585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 00:38:42.428 [2024-07-12 11:44:28.654683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.428 qpair failed and we were unable to recover it. 
00:38:42.428 [2024-07-12 11:44:28.654829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.428 [2024-07-12 11:44:28.654843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.654918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.654931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.654995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.655380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.655943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.655957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.656023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.656703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.656903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.656989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.657394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.657841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.657958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.657973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.658529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.658920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.658934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.659086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.659101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 
00:38:42.429 [2024-07-12 11:44:28.659179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.659194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.659275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.659289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.659455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.659469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.659603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.429 [2024-07-12 11:44:28.659618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.429 qpair failed and we were unable to recover it. 00:38:42.429 [2024-07-12 11:44:28.659777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.659792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.659958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.659972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.660591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.660782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.660989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.661076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.661265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.661411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.661514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.661674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.661775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.661855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.661869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.662648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.662816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.662998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.663012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.663103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.663117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 00:38:42.430 [2024-07-12 11:44:28.663199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.430 [2024-07-12 11:44:28.663213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.430 qpair failed and we were unable to recover it. 
00:38:42.430 [2024-07-12 11:44:28.663346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.663982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.663996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.430 [2024-07-12 11:44:28.664900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.430 [2024-07-12 11:44:28.664916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.430 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.664992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.665955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.665969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.666870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.666884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.667914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.667928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.668977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.668992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.431 [2024-07-12 11:44:28.669714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.431 [2024-07-12 11:44:28.669727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.431 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.669892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.669907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.670812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.670828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.671912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.671998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.672876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.672890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.673977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.673992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.674149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.674372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.674463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.674665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.674866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.674979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.675000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.675091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.675107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.675172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.675185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.432 qpair failed and we were unable to recover it.
00:38:42.432 [2024-07-12 11:44:28.675267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.432 [2024-07-12 11:44:28.675281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.675931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.675945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.676977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.676991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.677276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.677461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.677562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.677708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.433 [2024-07-12 11:44:28.677856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.433 qpair failed and we were unable to recover it.
00:38:42.433 [2024-07-12 11:44:28.677951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.677966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 
00:38:42.433 [2024-07-12 11:44:28.678748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.678945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.678960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 
00:38:42.433 [2024-07-12 11:44:28.679283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.433 qpair failed and we were unable to recover it. 00:38:42.433 [2024-07-12 11:44:28.679905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.433 [2024-07-12 11:44:28.679919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.680004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.680662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.680915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.680929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.681334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.681703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.681918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.682604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.682938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.682952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.683374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.683927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.683942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.684181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.684344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.684527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.684609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.684689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.684803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.684958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.684972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.685055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.685070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.685215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.685229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 00:38:42.434 [2024-07-12 11:44:28.685314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.434 [2024-07-12 11:44:28.685328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.434 qpair failed and we were unable to recover it. 
00:38:42.434 [2024-07-12 11:44:28.685537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.685560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.685650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.685672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.685820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.685842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.685918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.685934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 
00:38:42.435 [2024-07-12 11:44:28.686163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 
00:38:42.435 [2024-07-12 11:44:28.686744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.686917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.686931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 
00:38:42.435 [2024-07-12 11:44:28.687256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 00:38:42.435 [2024-07-12 11:44:28.687627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.435 [2024-07-12 11:44:28.687640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.435 qpair failed and we were unable to recover it. 
00:38:42.435 [2024-07-12 11:44:28.687718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.435 [2024-07-12 11:44:28.687732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.435 qpair failed and we were unable to recover it.
00:38:42.435 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error triple repeats continuously from 11:44:28.687815 through 11:44:28.702917, each attempt failing with errno = 111 against addr=10.0.0.2, port=4420 and ending "qpair failed and we were unable to recover it."; nearly all attempts are for tqpair=0x61500033fe80, with single attempts at 11:44:28.690 also failing for tqpair=0x61500032d780, 0x61500032ff80, and 0x615000350000 ...]
00:38:42.438 [2024-07-12 11:44:28.703002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 
00:38:42.438 [2024-07-12 11:44:28.703612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.703888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.703903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 
00:38:42.438 [2024-07-12 11:44:28.704424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.704820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.704833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 
00:38:42.438 [2024-07-12 11:44:28.704985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.705098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.705248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.705427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.705643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 
00:38:42.438 [2024-07-12 11:44:28.705730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.438 [2024-07-12 11:44:28.705744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.438 qpair failed and we were unable to recover it. 00:38:42.438 [2024-07-12 11:44:28.705820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.705834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.705914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.705929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.705988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.706140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.706240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.706354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.706473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.706630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.706803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.706948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.706963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.707639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.707831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.707987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.708196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.708771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.708973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.708987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.709322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.709904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.709916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.710050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 
00:38:42.439 [2024-07-12 11:44:28.710609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.439 [2024-07-12 11:44:28.710800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.439 qpair failed and we were unable to recover it. 00:38:42.439 [2024-07-12 11:44:28.710960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.710974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 
00:38:42.440 [2024-07-12 11:44:28.711386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.711829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 
00:38:42.440 [2024-07-12 11:44:28.711930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.711945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.440 [2024-07-12 11:44:28.712023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.440 [2024-07-12 11:44:28.712038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.440 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 
00:38:42.745 [2024-07-12 11:44:28.712501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.712904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.712923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 00:38:42.745 [2024-07-12 11:44:28.713011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.745 [2024-07-12 11:44:28.713025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.745 qpair failed and we were unable to recover it. 
00:38:42.748 [2024-07-12 11:44:28.727539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.748 [2024-07-12 11:44:28.727554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.748 qpair failed and we were unable to recover it.
00:38:42.748 [2024-07-12 11:44:28.727694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.748 [2024-07-12 11:44:28.727710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.748 qpair failed and we were unable to recover it. 00:38:42.748 [2024-07-12 11:44:28.727851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.748 [2024-07-12 11:44:28.727866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.748 qpair failed and we were unable to recover it. 00:38:42.748 [2024-07-12 11:44:28.728014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.748 [2024-07-12 11:44:28.728029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.748 qpair failed and we were unable to recover it. 00:38:42.748 [2024-07-12 11:44:28.728181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.748 [2024-07-12 11:44:28.728195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.748 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.728330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.728344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.728429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.728444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.728681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.728696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.728874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.728889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.728987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.729184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.729351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.729462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.729604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.729705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.729819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.729928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.729948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.730685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.730960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.730994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.731388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.731853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 
00:38:42.749 [2024-07-12 11:44:28.731959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.731979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.732055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.732074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.749 [2024-07-12 11:44:28.732150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.749 [2024-07-12 11:44:28.732169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.749 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.732258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.732477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 
00:38:42.750 [2024-07-12 11:44:28.732594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.732689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.732771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.732947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.732962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 
00:38:42.750 [2024-07-12 11:44:28.733156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 
00:38:42.750 [2024-07-12 11:44:28.733730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.733916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.733931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 
00:38:42.750 [2024-07-12 11:44:28.734191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 00:38:42.750 [2024-07-12 11:44:28.734773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.750 [2024-07-12 11:44:28.734790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.750 qpair failed and we were unable to recover it. 
00:38:42.750 [2024-07-12 11:44:28.734923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.734938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 
00:38:42.751 [2024-07-12 11:44:28.735512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.735894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.735922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 
00:38:42.751 [2024-07-12 11:44:28.736191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 
00:38:42.751 [2024-07-12 11:44:28.736801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.736894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.736909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 
00:38:42.751 [2024-07-12 11:44:28.737432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.737898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.737913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 00:38:42.751 [2024-07-12 11:44:28.738083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.751 [2024-07-12 11:44:28.738098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.751 qpair failed and we were unable to recover it. 
00:38:42.753 [2024-07-12 11:44:28.744237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.753 [2024-07-12 11:44:28.744251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.753 qpair failed and we were unable to recover it.
00:38:42.753 [2024-07-12 11:44:28.744349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.753 [2024-07-12 11:44:28.744363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.753 qpair failed and we were unable to recover it.
00:38:42.753 [2024-07-12 11:44:28.744474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.753 [2024-07-12 11:44:28.744500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.753 qpair failed and we were unable to recover it.
00:38:42.753 [2024-07-12 11:44:28.744577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.753 [2024-07-12 11:44:28.744599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.753 qpair failed and we were unable to recover it.
00:38:42.753 [2024-07-12 11:44:28.744700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.753 [2024-07-12 11:44:28.744722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.753 qpair failed and we were unable to recover it.
00:38:42.754 [2024-07-12 11:44:28.751646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.751659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.751747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.751762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.751836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.751851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.751935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.751950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.752012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.752026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 
00:38:42.754 [2024-07-12 11:44:28.752096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.752111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.752310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.754 [2024-07-12 11:44:28.752324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.754 qpair failed and we were unable to recover it. 00:38:42.754 [2024-07-12 11:44:28.752413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.752428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.752571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.752586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.752671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.752686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.752767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.752782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.752910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.752924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.752986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.753277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.753886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.753962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.753976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.754467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.754922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.754936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.755021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.755243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.755421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.755603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.755732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.755830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.755926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.755941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.756021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.756035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.756103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.756118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 00:38:42.755 [2024-07-12 11:44:28.756184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.756198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.755 qpair failed and we were unable to recover it. 
00:38:42.755 [2024-07-12 11:44:28.756319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.755 [2024-07-12 11:44:28.756333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.756405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.756419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.756519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.756534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.756743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.756757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.756825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.756840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.756984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.756998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.757064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.757079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.757263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.757303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.757503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.757547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.757738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.757781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.757980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.758080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.758241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.758475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.758713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.758870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.758910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.759076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.759091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.759242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.759294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.759490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.759532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.759753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.759793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.759949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.759963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.760102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.760143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.760342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.760395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.760593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.760634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.760891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.760910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.761015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.761036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.761249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.761290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.761431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.761474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.761757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.761799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.762002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.762042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.762229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.762269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.762492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.762534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.762685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.762733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.763021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.763063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.763285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.763327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.763541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.763583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.763811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.763864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.763969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.763986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.764058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.764073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.764171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.764211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 
00:38:42.756 [2024-07-12 11:44:28.764336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.756 [2024-07-12 11:44:28.764387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.756 qpair failed and we were unable to recover it. 00:38:42.756 [2024-07-12 11:44:28.764537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.757 [2024-07-12 11:44:28.764577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.757 qpair failed and we were unable to recover it. 00:38:42.757 [2024-07-12 11:44:28.764697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.757 [2024-07-12 11:44:28.764737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.757 qpair failed and we were unable to recover it. 00:38:42.757 [2024-07-12 11:44:28.764996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.757 [2024-07-12 11:44:28.765011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.757 qpair failed and we were unable to recover it. 00:38:42.757 [2024-07-12 11:44:28.765107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.757 [2024-07-12 11:44:28.765121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.757 qpair failed and we were unable to recover it. 
00:38:42.757 [2024-07-12 11:44:28.765199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.765213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.765373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.765395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.765580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.765620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.765823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.765863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.766052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.766092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.766374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.766426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.766614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.766654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.766816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.766857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.767132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.767172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.767372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.767425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.767614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.767654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.767944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.767958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.768854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.768868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.769001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.769041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.769209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.769249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.769393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.769434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.769632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.769672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.769929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.769969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.770191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.770427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.770470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.770715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.770731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.770884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.770899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.771048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.771063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.771245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.771285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.771412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.771453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.771667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.771707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.771864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.771878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.772024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.772038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.772186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.772200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.772291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.757 [2024-07-12 11:44:28.772328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.757 qpair failed and we were unable to recover it.
00:38:42.757 [2024-07-12 11:44:28.772599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.772641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.772842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.772883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.773016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.773056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.773256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.773297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.773522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.773564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.773768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.773809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.774006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.774046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.774241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.774281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.774422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.774464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.774742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.774782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.774914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.774955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.775212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.775226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.775390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.775405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.775567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.775610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.775799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.775851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.776050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.776091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.776290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.776331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.776553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.776596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.776811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.776851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.777052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.777093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.777399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.777440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.777568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.777583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.777747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.777782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.777977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.778017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.778222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.778262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.778515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.778556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.778714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.778728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.778938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.778978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.779173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.779213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.779359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.779409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.779661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.779708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.779895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.779936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.780075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.780115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.780257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.780297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.780468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.780512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.780650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.780690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.780880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.780921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.781044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.781084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.781316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.781356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.781559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.781601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.758 [2024-07-12 11:44:28.781815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.758 [2024-07-12 11:44:28.781856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.758 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.782058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.782099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.782334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.782375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.782613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.782654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.782789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.782803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.782961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.782987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.783976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.783991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.784123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.784137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.784281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.784296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.784450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.784492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.784686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.784727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.784982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.785018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.785159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.759 [2024-07-12 11:44:28.785173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.759 qpair failed and we were unable to recover it.
00:38:42.759 [2024-07-12 11:44:28.785308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.785322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.785587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.785628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.785824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.785839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.786013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.786054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.786185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.786226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 
00:38:42.759 [2024-07-12 11:44:28.786446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.786487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.786686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.786700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.786932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.786973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.787117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.787158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.787344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.787393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 
00:38:42.759 [2024-07-12 11:44:28.787527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.787568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.787790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.787830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.759 qpair failed and we were unable to recover it. 00:38:42.759 [2024-07-12 11:44:28.788082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.759 [2024-07-12 11:44:28.788128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.788335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.788375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.788615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.788655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 
00:38:42.760 [2024-07-12 11:44:28.788911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.788926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.789127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.789141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.789310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.789325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.789466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.789481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.789692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.789733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 
00:38:42.760 [2024-07-12 11:44:28.789871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.789924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.790220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.790260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.790402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.790444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.790642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.790682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.790823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.790861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 
00:38:42.760 [2024-07-12 11:44:28.791062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.791077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.791246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.791261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.791428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.791470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.791658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.791698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.791838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.791887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 
00:38:42.760 [2024-07-12 11:44:28.792090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.792105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.792239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.792254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.792457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.792498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.792754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.792795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.792997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.793037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 
00:38:42.760 [2024-07-12 11:44:28.793251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.760 [2024-07-12 11:44:28.793291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.760 qpair failed and we were unable to recover it. 00:38:42.760 [2024-07-12 11:44:28.793488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.793530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.793797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.793811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.794015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.794055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.794320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.794361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.794646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.794688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.794876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.794920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.795146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.795293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.795512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.795668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.795829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.795926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.795959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.796239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.796280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.796485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.796527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.796802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.796841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.796971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.797508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.797977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.797991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.798106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.798183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.798302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.798514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.798678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 
00:38:42.761 [2024-07-12 11:44:28.798805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.798845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.761 [2024-07-12 11:44:28.799001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.761 [2024-07-12 11:44:28.799042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.761 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.799319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.799359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.799653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.799694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.799897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.799949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 
00:38:42.762 [2024-07-12 11:44:28.800034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.800047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.800134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.800147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.800354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.800420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.800608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.800648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.800924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.800964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 
00:38:42.762 [2024-07-12 11:44:28.801150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.801190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.801340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.801392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.801624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.801665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.801880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.801920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.802112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.802126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 
00:38:42.762 [2024-07-12 11:44:28.802239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.802255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.802470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.802513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.802714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.802754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.803030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.803045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 00:38:42.762 [2024-07-12 11:44:28.803194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.762 [2024-07-12 11:44:28.803213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.762 qpair failed and we were unable to recover it. 
00:38:42.762 [2024-07-12 11:44:28.803368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.803416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.803673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.803713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.803967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.804193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.804488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.804653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.804758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.804946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.804994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.805182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.805223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.762 [2024-07-12 11:44:28.805429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.762 [2024-07-12 11:44:28.805480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.762 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.805618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.805658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.805941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.805981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.806178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.806192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.806448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.806490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.806699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.806740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.807014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.807055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.807242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.807283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.807499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.807541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.807734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.807775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.807922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.807963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.808158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.808198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.808408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.808450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.808581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.808621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.808738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.808779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.808976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.808991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.809098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.809140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.809356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.809403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.809659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.809699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.809909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.809924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.810786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.810826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.811080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.811120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.811268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.811309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.811514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.811555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.763 [2024-07-12 11:44:28.811826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.763 [2024-07-12 11:44:28.811840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.763 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.811935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.811975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.812222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.812263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.812480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.812521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.812712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.812752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.812898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.812913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.813045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.813060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.813209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.813250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.813464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.813506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.813755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.813800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.814070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.814111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.814314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.814354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.814581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.814622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.814725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.814738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.814879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.814894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.815130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.815348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.815443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.815603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.815755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.815964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.816004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.816136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.816177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.816397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.816439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.816634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.816674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.816948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.816998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.817236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.817260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.817409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.817424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.817643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.817683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.817828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.817868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.818033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.818078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.818302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.818317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.818468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.818483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.818594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.818634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.818908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.818949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.819099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.819140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.819311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.819326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.819463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.819477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.819607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.819621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.764 [2024-07-12 11:44:28.819792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.764 [2024-07-12 11:44:28.819832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.764 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.820972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.820986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.821179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.821222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.821349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.821400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.821588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.821628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.821848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.821862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.822016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.822062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.822312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.822353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.822569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.822610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.822762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.822776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.822853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.822866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.823014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.823285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.823545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.823727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.823849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.823986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.824001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.824264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.824304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.824490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.824531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.824744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.824783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.825065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.825105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.825293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.825333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.825486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.825527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.825788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.825829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.765 [2024-07-12 11:44:28.826860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.765 [2024-07-12 11:44:28.826874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.765 qpair failed and we were unable to recover it.
00:38:42.766 [2024-07-12 11:44:28.827032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.766 [2024-07-12 11:44:28.827073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.766 qpair failed and we were unable to recover it.
00:38:42.766 [2024-07-12 11:44:28.827352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.766 [2024-07-12 11:44:28.827405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.766 qpair failed and we were unable to recover it.
00:38:42.766 [2024-07-12 11:44:28.827611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.766 [2024-07-12 11:44:28.827652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.766 qpair failed and we were unable to recover it.
00:38:42.766 [2024-07-12 11:44:28.828000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.766 [2024-07-12 11:44:28.828086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.766 qpair failed and we were unable to recover it.
00:38:42.766 [2024-07-12 11:44:28.828345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.828411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.828705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.828748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.828944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.829165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.829363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.829574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.829739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.829921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.829941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.830092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.830135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.830341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.830390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.830590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.830630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.830867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.830882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.831018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.831034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.831191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.831231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.831485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.831526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.831809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.831849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.831998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.832047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.832174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.832214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.832431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.832474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.832755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.832798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.833057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.833157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.833267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.833487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.833805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.833939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.833954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.834106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.834121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.834329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.834369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.834582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.834622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.834754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.834768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.834903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.834917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.835000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.835013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.835154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.835169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.835336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.835384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.835517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.835557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 
00:38:42.766 [2024-07-12 11:44:28.835812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.766 [2024-07-12 11:44:28.835852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.766 qpair failed and we were unable to recover it. 00:38:42.766 [2024-07-12 11:44:28.836125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.836166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.836302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.836344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.836682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.836770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.837030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.837054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.837239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.837259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.837336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.837405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.837568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.837610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.837870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.837912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.838100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.838141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.838402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.838445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.838590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.838631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.838762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.838810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.839022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.839041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.839185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.839205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.839445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.839491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.839698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.839738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.839935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.839952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.840039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.840053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.840224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.840239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.840336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.840374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.840604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.840645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.840789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.840830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.841086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.841101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.841256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.841270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.841422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.841464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.841654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.841694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.841832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.841872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.842071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.842112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.842300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.842340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.842534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.842577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.842719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.842759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.842943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.842983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.843213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.843228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.843381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.843397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.843549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.843589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.843799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.843839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.844042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.844082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.844218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.844259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 00:38:42.767 [2024-07-12 11:44:28.844411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.767 [2024-07-12 11:44:28.844454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.767 qpair failed and we were unable to recover it. 
00:38:42.767 [2024-07-12 11:44:28.844762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.767 [2024-07-12 11:44:28.844804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.767 qpair failed and we were unable to recover it.
00:38:42.767 [2024-07-12 11:44:28.844937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.767 [2024-07-12 11:44:28.844978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.767 qpair failed and we were unable to recover it.
00:38:42.767 [2024-07-12 11:44:28.845172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.845186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.845261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.845274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.845519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.845562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.845747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.845769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.845935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.845956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.846048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.846066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.846243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.846263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.846374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.846430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.846646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.846687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.846885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.846926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.847019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.847039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.847249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.847265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.847467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.847495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.847639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.847679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.847901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.847940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.848085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.848140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.848338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.848353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.848512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.848527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.848608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.848662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.848883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.848924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.849143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.849181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.849332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.849346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.849480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.849495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.849577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.849590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.849813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.849854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.850050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.850090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.850291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.850331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.850593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.850635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.850789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.850829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.851056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.851118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.851387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.851429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.851617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.851657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.851850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.851865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.851947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.851961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.852118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.852158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.852410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.852451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.852718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.852758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.852958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.852972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.853187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.853202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.853429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.853471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.853620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.768 [2024-07-12 11:44:28.853660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.768 qpair failed and we were unable to recover it.
00:38:42.768 [2024-07-12 11:44:28.853911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.853951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.854908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.854922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.855126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.855173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.855364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.855425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.855569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.855609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.855909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.855950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.856140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.856154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.856320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.856360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.856573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.856621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.856761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.856800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.856991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.857005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.857137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.857151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.857326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.857366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.857636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.857677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.857959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.858000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.858186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.858226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.858351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.858403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.858678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.858718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.858906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.858920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.859002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.859015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.859163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.859177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.859323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.859337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.859482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.859524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.859777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.859818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.860013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.860053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.860153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.860166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.860403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.860445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.860576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.860616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.860805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.860845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.861095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.861109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.861282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.861297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.769 qpair failed and we were unable to recover it.
00:38:42.769 [2024-07-12 11:44:28.861524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.769 [2024-07-12 11:44:28.861567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.861788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.861828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.861973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.862218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.862398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.862657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.862814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.862957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.862972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.863965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.863978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.864861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.864875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.865945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.865987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.866260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.866300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.866581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.866623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.866819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.866859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.867089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.867130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.867316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.867357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.867571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.770 [2024-07-12 11:44:28.867612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.770 qpair failed and we were unable to recover it.
00:38:42.770 [2024-07-12 11:44:28.867813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.770 [2024-07-12 11:44:28.867827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.770 qpair failed and we were unable to recover it. 00:38:42.770 [2024-07-12 11:44:28.867899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.770 [2024-07-12 11:44:28.867913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.770 qpair failed and we were unable to recover it. 00:38:42.770 [2024-07-12 11:44:28.868059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.770 [2024-07-12 11:44:28.868074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.770 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.868222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.868262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.868451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.868492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.868764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.868804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.868993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.869173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.869313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.869407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.869487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.869800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.869840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.870097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.870137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.870270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.870310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.870522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.870563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.870747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.870787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.871038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.871052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.871264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.871278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.871421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.871436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.871611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.871652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.871852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.871893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.872088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.872128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.872368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.872647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.872688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.872830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.872870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.873058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.873231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.873429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.873600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.873747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.873863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.873903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.874096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.874137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.874438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.874480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.874735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.874777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.874963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.875176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.875364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.875466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.875636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.875956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.875996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 
00:38:42.771 [2024-07-12 11:44:28.876182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.876222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.876444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.876486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.876709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.876749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.771 [2024-07-12 11:44:28.876971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.771 [2024-07-12 11:44:28.877015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.771 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.877215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.877266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.877469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.877511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.877698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.877740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.877991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.878165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.878387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.878488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.878639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.878736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.878883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.878897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.879038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.879212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.879251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.879463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.879505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.879710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.879750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.879941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.879981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.880261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.880301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.880561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.880602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.880749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.880789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.881006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.881020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.881256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.881296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.881428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.881470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.881671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.881711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.881919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.881958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.882255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.882296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.882572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.882614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.882748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.882788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.882990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.883031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.883285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.883326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.883525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.883566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.883843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.883884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.884080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.884121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.772 [2024-07-12 11:44:28.884316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.884356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.884565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.884606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.884728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.884768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.885031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.885071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 00:38:42.772 [2024-07-12 11:44:28.885316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.772 [2024-07-12 11:44:28.885357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.772 qpair failed and we were unable to recover it. 
00:38:42.774 [log lines 11:44:28.885517 through 11:44:28.907283 collapsed: the identical posix.c:1038:posix_sock_create *ERROR*: connect() failed, errno = 111 and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock *ERROR* pair repeats for every retry against tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420); each attempt ends with "qpair failed and we were unable to recover it."] 
00:38:42.776 [2024-07-12 11:44:28.907432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.907474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.907736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.907777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.907989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.908030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.908172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.908212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.908439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.908481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 
00:38:42.776 [2024-07-12 11:44:28.908625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.908665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.908858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.908898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.909099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.909137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.909271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.909311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.909530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.909572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 
00:38:42.776 [2024-07-12 11:44:28.909833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.909873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.910072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.910240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.910366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.910543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 
00:38:42.776 [2024-07-12 11:44:28.910713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.910937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.910976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.911166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.911181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.911407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.911450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.911571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.911617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 
00:38:42.776 [2024-07-12 11:44:28.911738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.911778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.911976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.912016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.912197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.912211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.912374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.912426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.776 [2024-07-12 11:44:28.912645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.912684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 
00:38:42.776 [2024-07-12 11:44:28.912829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.776 [2024-07-12 11:44:28.912870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.776 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.913121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.913162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.913281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.913320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.913456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.913496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.913681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.913722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.913914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.913928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.914062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.914076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.914224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.914239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.914385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.914400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.914605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.914645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.914924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.914965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.915164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.915178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.915343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.915392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.915584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.915623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.915903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.915944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.916219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.916266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.916398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.916412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.916490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.916503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.916581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.916616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.916735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.916776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.916982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.917223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.917324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.917469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.917650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.917914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.917954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.918179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.918229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.918400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.918415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.918491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.918505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.918669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.918683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.918892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.918932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.919065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.919105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.919295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.919340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.919512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.919527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.919738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.919754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 
00:38:42.777 [2024-07-12 11:44:28.919891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.777 [2024-07-12 11:44:28.919906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.777 qpair failed and we were unable to recover it. 00:38:42.777 [2024-07-12 11:44:28.920050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.920090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.920220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.920260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.920471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.920512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.920699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.920738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 
00:38:42.778 [2024-07-12 11:44:28.921001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.921167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.921356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.921503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.921585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 
00:38:42.778 [2024-07-12 11:44:28.921748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.921923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.921964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.922147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.922188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.922465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.922507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.922637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.922676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 
00:38:42.778 [2024-07-12 11:44:28.922815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.922854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.923128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.923168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.923429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.923444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.923519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.923533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 00:38:42.778 [2024-07-12 11:44:28.923678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.778 [2024-07-12 11:44:28.923692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.778 qpair failed and we were unable to recover it. 
00:38:42.780 [2024-07-12 11:44:28.933156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.780 [2024-07-12 11:44:28.933196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.780 qpair failed and we were unable to recover it.
00:38:42.780 [2024-07-12 11:44:28.933405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.780 [2024-07-12 11:44:28.933490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.780 qpair failed and we were unable to recover it.
00:38:42.780 [2024-07-12 11:44:28.933764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.780 [2024-07-12 11:44:28.933848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.780 qpair failed and we were unable to recover it.
00:38:42.780 [2024-07-12 11:44:28.934051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.780 [2024-07-12 11:44:28.934136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.780 qpair failed and we were unable to recover it.
00:38:42.780 [2024-07-12 11:44:28.934395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.780 [2024-07-12 11:44:28.934438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.780 qpair failed and we were unable to recover it.
00:38:42.780 [2024-07-12 11:44:28.934695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.934735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.934893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.934934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.935083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.935097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.935303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.935344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.935553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.935604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 
00:38:42.780 [2024-07-12 11:44:28.935773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.935826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.935974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.936016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.936215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.936257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.936448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.936469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.936671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.936720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 
00:38:42.780 [2024-07-12 11:44:28.936922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.936964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.937172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.937213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.937491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.937534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.937791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.937833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.937967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.938021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 
00:38:42.780 [2024-07-12 11:44:28.938229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.938270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.938461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.780 [2024-07-12 11:44:28.938504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.780 qpair failed and we were unable to recover it. 00:38:42.780 [2024-07-12 11:44:28.938767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.938809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.939008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.939049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.939329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.939388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.781 [2024-07-12 11:44:28.939535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.939555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.939795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.939815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.940031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.940073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.940282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.940323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.940549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.940592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.781 [2024-07-12 11:44:28.940790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.940832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.940969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.941010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.941274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.941316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.941559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.941601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.941749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.941790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.781 [2024-07-12 11:44:28.941927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.941968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.942111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.942152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.942287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.942328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.942485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.942528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.942742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.942784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.781 [2024-07-12 11:44:28.942917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.942958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.943122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.943185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.943341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.943395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.943611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.943658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.943983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.944026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.781 [2024-07-12 11:44:28.944230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.944271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.944565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.944608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.944878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.944919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.945124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.945168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 00:38:42.781 [2024-07-12 11:44:28.945264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.781 [2024-07-12 11:44:28.945284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.781 qpair failed and we were unable to recover it. 
00:38:42.782 [2024-07-12 11:44:28.952172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.782 [2024-07-12 11:44:28.952217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.782 qpair failed and we were unable to recover it.
00:38:42.782 [2024-07-12 11:44:28.952408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.782 [2024-07-12 11:44:28.952493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.782 qpair failed and we were unable to recover it.
00:38:42.782 [2024-07-12 11:44:28.952775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.782 [2024-07-12 11:44:28.952859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.782 qpair failed and we were unable to recover it.
00:38:42.782 [2024-07-12 11:44:28.953144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.782 [2024-07-12 11:44:28.953190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.782 qpair failed and we were unable to recover it.
00:38:42.782 [2024-07-12 11:44:28.953406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.782 [2024-07-12 11:44:28.953422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.782 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.958412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.783 [2024-07-12 11:44:28.958426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.783 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.958597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.783 [2024-07-12 11:44:28.958623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.783 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.958802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.783 [2024-07-12 11:44:28.958845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.783 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.958937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.783 [2024-07-12 11:44:28.958966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.783 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.959111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.783 [2024-07-12 11:44:28.959127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.783 qpair failed and we were unable to recover it.
00:38:42.783 [2024-07-12 11:44:28.959212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.959330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.959432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.959605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.959799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 
00:38:42.783 [2024-07-12 11:44:28.959982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.959997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 
00:38:42.783 [2024-07-12 11:44:28.960661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.960973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.960986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.961067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.961081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 00:38:42.783 [2024-07-12 11:44:28.961152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.783 [2024-07-12 11:44:28.961167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.783 qpair failed and we were unable to recover it. 
00:38:42.783 [2024-07-12 11:44:28.961310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.961468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.961553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.961631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.961711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 
00:38:42.784 [2024-07-12 11:44:28.961816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.961913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.961927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.962010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.962022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.962177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.962191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 00:38:42.784 [2024-07-12 11:44:28.962396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.784 [2024-07-12 11:44:28.962410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.784 qpair failed and we were unable to recover it. 
00:38:42.784 [2024-07-12 11:44:28.962571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.962585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.962723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.962738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.962816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.962830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.962902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.962915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.963929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.963952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.964861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.964994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.965009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.965157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.965173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.965305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.965319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.965404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.965419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.965497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.784 [2024-07-12 11:44:28.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.784 qpair failed and we were unable to recover it.
00:38:42.784 [2024-07-12 11:44:28.965657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.965673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.965740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.965753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.965951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.965965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.966883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.966897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.967900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.967915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.968851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.968864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.969948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.969963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.785 [2024-07-12 11:44:28.970670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.785 [2024-07-12 11:44:28.970686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.785 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.970846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.970862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.971982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.971997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.972823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.972836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.973936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.973952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.974035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.786 [2024-07-12 11:44:28.974048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.786 qpair failed and we were unable to recover it.
00:38:42.786 [2024-07-12 11:44:28.974248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.974346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.974505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.974667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.974766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 
00:38:42.786 [2024-07-12 11:44:28.974868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.974882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.975087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.975100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.975181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.975194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.975283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.975298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 00:38:42.786 [2024-07-12 11:44:28.975441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.786 [2024-07-12 11:44:28.975456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.786 qpair failed and we were unable to recover it. 
00:38:42.786 [2024-07-12 11:44:28.975545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.975559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.975709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.975724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.975879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.975895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.975973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.975992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.976193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.976796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.976907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.976922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.977308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.977877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.977963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.977977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.978504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.978927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.978941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.979074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.979677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.979933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.979945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.980098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.980112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 
00:38:42.787 [2024-07-12 11:44:28.980196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.787 [2024-07-12 11:44:28.980209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.787 qpair failed and we were unable to recover it. 00:38:42.787 [2024-07-12 11:44:28.980292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.980388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.980483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.980640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.980806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.980890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.980905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.981489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.981906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.981921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.982165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.982885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.982983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.982998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.983525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.983949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.983963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 00:38:42.788 [2024-07-12 11:44:28.984096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.788 [2024-07-12 11:44:28.984111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.788 qpair failed and we were unable to recover it. 
00:38:42.788 [2024-07-12 11:44:28.984190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.789 [2024-07-12 11:44:28.984204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.789 qpair failed and we were unable to recover it. 00:38:42.789 [2024-07-12 11:44:28.984288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.789 [2024-07-12 11:44:28.984302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.789 qpair failed and we were unable to recover it. 00:38:42.789 [2024-07-12 11:44:28.984375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.789 [2024-07-12 11:44:28.984395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.789 qpair failed and we were unable to recover it. 00:38:42.789 [2024-07-12 11:44:28.984524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.789 [2024-07-12 11:44:28.984547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.789 qpair failed and we were unable to recover it. 00:38:42.789 [2024-07-12 11:44:28.984616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.789 [2024-07-12 11:44:28.984630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.789 qpair failed and we were unable to recover it. 
00:38:42.789 [2024-07-12 11:44:28.984718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.789 [2024-07-12 11:44:28.984732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.789 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420, errno = 111) repeats through 11:44:28.991366; identical records elided ...]
00:38:42.790 [2024-07-12 11:44:28.991446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.790 [2024-07-12 11:44:28.991469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.790 qpair failed and we were unable to recover it.
00:38:42.790 [2024-07-12 11:44:28.991620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.790 [2024-07-12 11:44:28.991640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.790 qpair failed and we were unable to recover it.
00:38:42.790 [2024-07-12 11:44:28.991789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.790 [2024-07-12 11:44:28.991808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.790 qpair failed and we were unable to recover it.
[... failures resume for tqpair=0x61500033fe80 at 11:44:28.991972 and the identical triplet repeats through 11:44:29.000121; records elided ...]
00:38:42.791 [2024-07-12 11:44:29.000204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.791 [2024-07-12 11:44:29.000219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.791 qpair failed and we were unable to recover it. 00:38:42.791 [2024-07-12 11:44:29.000328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.791 [2024-07-12 11:44:29.000343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.791 qpair failed and we were unable to recover it. 00:38:42.791 [2024-07-12 11:44:29.000492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.791 [2024-07-12 11:44:29.000509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.791 qpair failed and we were unable to recover it. 00:38:42.791 [2024-07-12 11:44:29.000593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.791 [2024-07-12 11:44:29.000608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.791 qpair failed and we were unable to recover it. 00:38:42.791 [2024-07-12 11:44:29.000691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.791 [2024-07-12 11:44:29.000706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.791 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.000779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.000793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.000974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.000988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.001510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.001948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.001963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.002067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.002569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.002921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.002951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d280 (9): Bad file descriptor 00:38:42.792 [2024-07-12 11:44:29.003074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.003099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.003196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.003217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.003434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.003457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.003638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.003658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.003815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.003835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.004013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.004040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.004132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.004152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.004310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.004330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.004493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.004514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.004804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.004822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.005296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.005802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.005887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.005903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 
00:38:42.792 [2024-07-12 11:44:29.006603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.792 qpair failed and we were unable to recover it. 00:38:42.792 [2024-07-12 11:44:29.006896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.792 [2024-07-12 11:44:29.006910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.006987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 
00:38:42.793 [2024-07-12 11:44:29.007081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 
00:38:42.793 [2024-07-12 11:44:29.007658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.007927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.007941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.008097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 
00:38:42.793 [2024-07-12 11:44:29.008181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.008282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.008375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.008564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.008733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 
00:38:42.793 [2024-07-12 11:44:29.008964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.008978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 
00:38:42.793 [2024-07-12 11:44:29.009581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.009965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.009979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.010072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.793 [2024-07-12 11:44:29.010087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.793 qpair failed and we were unable to recover it. 00:38:42.793 [2024-07-12 11:44:29.010169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 
00:38:42.794 [2024-07-12 11:44:29.010268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 00:38:42.794 [2024-07-12 11:44:29.010365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 00:38:42.794 [2024-07-12 11:44:29.010462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 00:38:42.794 [2024-07-12 11:44:29.010616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 00:38:42.794 [2024-07-12 11:44:29.010715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.794 [2024-07-12 11:44:29.010729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.794 qpair failed and we were unable to recover it. 
00:38:42.796 [2024-07-12 11:44:29.025434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.025448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 00:38:42.796 [2024-07-12 11:44:29.025610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.025625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 00:38:42.796 [2024-07-12 11:44:29.025760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.025773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 00:38:42.796 [2024-07-12 11:44:29.025920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.025935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 00:38:42.796 [2024-07-12 11:44:29.026072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.026086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 
00:38:42.796 [2024-07-12 11:44:29.026314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.796 [2024-07-12 11:44:29.026327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.796 qpair failed and we were unable to recover it. 00:38:42.796 [2024-07-12 11:44:29.026463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.026477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.026607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.026622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.026685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.026698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.026770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.026783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.026879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.026895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.027067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.027266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.027433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.027606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.027764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.027930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.027945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.028347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.028889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.028973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.028987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.029593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.029931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.030021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.030035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 
00:38:42.797 [2024-07-12 11:44:29.030110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.030125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.030275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.030290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.030463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.797 [2024-07-12 11:44:29.030479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.797 qpair failed and we were unable to recover it. 00:38:42.797 [2024-07-12 11:44:29.030629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.030643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.030798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.030812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.030902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.031472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.031916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.031930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.032069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.032653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.032936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.032950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.033135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.033776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.033974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.033988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.034407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.034883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 
00:38:42.798 [2024-07-12 11:44:29.034963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.034977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.035052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.035064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.035294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.035310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.035443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.798 [2024-07-12 11:44:29.035458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.798 qpair failed and we were unable to recover it. 00:38:42.798 [2024-07-12 11:44:29.035527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.799 [2024-07-12 11:44:29.035539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.799 qpair failed and we were unable to recover it. 
00:38:42.800 [2024-07-12 11:44:29.041586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.800 [2024-07-12 11:44:29.041601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.800 qpair failed and we were unable to recover it.
00:38:42.800 [2024-07-12 11:44:29.041689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.800 [2024-07-12 11:44:29.041713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.800 qpair failed and we were unable to recover it.
00:38:42.800 [2024-07-12 11:44:29.041837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.800 [2024-07-12 11:44:29.041867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.800 qpair failed and we were unable to recover it.
00:38:42.800 [2024-07-12 11:44:29.042110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.800 [2024-07-12 11:44:29.042135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.800 qpair failed and we were unable to recover it.
00:38:42.800 [2024-07-12 11:44:29.042292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.800 [2024-07-12 11:44:29.042308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.800 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.047386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.801 [2024-07-12 11:44:29.047400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.801 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.047488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.801 [2024-07-12 11:44:29.047510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.801 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.047663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.801 [2024-07-12 11:44:29.047683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.801 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.047843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.801 [2024-07-12 11:44:29.047862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:42.801 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.048014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.801 [2024-07-12 11:44:29.048030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.801 qpair failed and we were unable to recover it.
00:38:42.801 [2024-07-12 11:44:29.050995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 00:38:42.801 [2024-07-12 11:44:29.051090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 00:38:42.801 [2024-07-12 11:44:29.051173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 00:38:42.801 [2024-07-12 11:44:29.051266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 00:38:42.801 [2024-07-12 11:44:29.051360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 
00:38:42.801 [2024-07-12 11:44:29.051472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.801 [2024-07-12 11:44:29.051487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.801 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.051589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.051604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.051689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.051704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.051772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.051788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.051986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.052074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.052261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.052413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.052626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.052731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.052839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.052854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.053120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.053151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.053370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.053399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.053553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.053573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.053734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.053753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.053900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.053920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.054634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.054919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.054934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.055135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.055309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.055392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.055568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.055668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.055761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.055911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.055925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.056480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.056958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.802 [2024-07-12 11:44:29.057105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.057119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 
00:38:42.802 [2024-07-12 11:44:29.057198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.802 [2024-07-12 11:44:29.057213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.802 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.057757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.057964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.057978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.058409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.058893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.058979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.058994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.059484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.059909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.059924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.060007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.060168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.060332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.060432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.060579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 00:38:42.803 [2024-07-12 11:44:29.060729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:42.803 [2024-07-12 11:44:29.060743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:42.803 qpair failed and we were unable to recover it. 
00:38:42.803 [2024-07-12 11:44:29.060888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.060903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.061969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.061983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.062203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.062218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.062294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.062307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.803 [2024-07-12 11:44:29.062393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.803 [2024-07-12 11:44:29.062409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.803 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.062484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.062498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.062584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.062599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.062876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.062891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.062959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.062976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.063950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.063969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.064056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.064071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.064693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.064719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.064961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.064982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.065949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.065968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.066955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.066990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.067174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.067200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.067291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.067311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:42.804 [2024-07-12 11:44:29.067452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:42.804 [2024-07-12 11:44:29.067473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:42.804 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.067689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.067709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.067807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.067828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.067915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.067934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.068030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.068052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.068606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.068634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.068722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.068746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.068832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.068852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.069920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.069941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.070932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.070949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.071975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.071990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.072057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.072070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.080 qpair failed and we were unable to recover it.
00:38:43.080 [2024-07-12 11:44:29.072151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.080 [2024-07-12 11:44:29.072165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.072918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.072932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.073802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.073818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.074894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.074991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.075944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.075957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.076837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.076997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.081 [2024-07-12 11:44:29.077588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.081 qpair failed and we were unable to recover it.
00:38:43.081 [2024-07-12 11:44:29.077739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.077753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.077820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.077835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.077898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.077911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.077989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.078142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 
00:38:43.081 [2024-07-12 11:44:29.078314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.078476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.078557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.078753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.078841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 
00:38:43.081 [2024-07-12 11:44:29.078917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.078929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.079011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.079026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.081 qpair failed and we were unable to recover it. 00:38:43.081 [2024-07-12 11:44:29.079095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.081 [2024-07-12 11:44:29.079109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.079449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.079954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.079969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.080139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.080722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.080965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.080980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.081317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.081833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.081927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.081941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.082773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.082802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.082890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.082907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.083065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.083285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.083440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.083599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.083681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.083828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.083934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.083948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.084576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.084937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.084951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.085196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.085833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.085919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.085933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.086335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 
00:38:43.082 [2024-07-12 11:44:29.086814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.082 [2024-07-12 11:44:29.086830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.082 qpair failed and we were unable to recover it. 00:38:43.082 [2024-07-12 11:44:29.086927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.086941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.087403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.087929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.087943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.088081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.088558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.088927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.088942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.089010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.089637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.089971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.089986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.090222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.090711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.090974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.090991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.091313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.091819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.091973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.091988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.092133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.092148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.092228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.092243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.092324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.092338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 
00:38:43.083 [2024-07-12 11:44:29.092412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.092427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.083 qpair failed and we were unable to recover it. 00:38:43.083 [2024-07-12 11:44:29.092501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.083 [2024-07-12 11:44:29.092516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.092658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.092672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.092812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.092826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.092893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.092907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.092974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.092988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.093503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.093907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.093922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.093989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.094541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.094979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.094994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.095067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.095680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.095927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.095942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.096198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.096740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.096927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.096942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.097411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.097969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.097983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.084 [2024-07-12 11:44:29.098060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.098075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.098207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.098222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.098293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.098309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.098462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.098476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 00:38:43.084 [2024-07-12 11:44:29.098565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.084 [2024-07-12 11:44:29.098581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.084 qpair failed and we were unable to recover it. 
00:38:43.086 [2024-07-12 11:44:29.107992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 
00:38:43.086 [2024-07-12 11:44:29.108662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.108910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.108983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.109002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.109087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.109103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 
00:38:43.086 [2024-07-12 11:44:29.112829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.112845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 
00:38:43.086 [2024-07-12 11:44:29.113600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.113841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.086 [2024-07-12 11:44:29.113999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.086 [2024-07-12 11:44:29.114042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.086 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.114198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.114332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.114455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.114559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.114716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.114883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.114899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.114996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.115656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.115969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.115985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.116133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.116217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.116387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.116510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.116672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.116821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.116909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.116924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.117674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.117946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.117961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.118209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.118760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.118911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.118928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.119560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.119934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.119953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.120105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.120270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.120384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.120567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.120687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.120786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.120944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.120959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 
00:38:43.087 [2024-07-12 11:44:29.121626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.087 [2024-07-12 11:44:29.121879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.087 [2024-07-12 11:44:29.121899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.087 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 
00:38:43.088 [2024-07-12 11:44:29.122389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.122965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.122980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 
00:38:43.088 [2024-07-12 11:44:29.123054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.123220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.123314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.123479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.123639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 
00:38:43.088 [2024-07-12 11:44:29.123732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.123846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.123863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.124086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.124101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.124216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 00:38:43.088 [2024-07-12 11:44:29.124288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.088 [2024-07-12 11:44:29.124303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.088 qpair failed and we were unable to recover it. 
00:38:43.088 [2024-07-12 11:44:29.124382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.124397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.124509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.124526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.124597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.124624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.124780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.124802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.124910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.124935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.125938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.125953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.126899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.126913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.127942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.127956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.128907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.128991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.088 [2024-07-12 11:44:29.129617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.088 qpair failed and we were unable to recover it.
00:38:43.088 [2024-07-12 11:44:29.129724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.129747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.129830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.129849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.129955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.129974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.130979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.130998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.131845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.131860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.132926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.132941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.133927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.133998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.134919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.134934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.089 [2024-07-12 11:44:29.135625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.089 qpair failed and we were unable to recover it.
00:38:43.089 [2024-07-12 11:44:29.135767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.089 [2024-07-12 11:44:29.135782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.089 qpair failed and we were unable to recover it. 00:38:43.089 [2024-07-12 11:44:29.135856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.089 [2024-07-12 11:44:29.135870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.089 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.135957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.135972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.136294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.136795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.136971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.136986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.137325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.137782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.137876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.137891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.138570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.138916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.138933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.139165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.139606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.139931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.139996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.140176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.140786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.140865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.140879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.141387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.141894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.141910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.141991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.142076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.142245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.142427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.142615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.142715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.142868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.142884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.143036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.143051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.143139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.143155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.143227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.143242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 
00:38:43.090 [2024-07-12 11:44:29.143318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.143333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.143409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.090 [2024-07-12 11:44:29.143425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.090 qpair failed and we were unable to recover it. 00:38:43.090 [2024-07-12 11:44:29.143530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.143546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.143691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.143709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.143861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.143876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.143950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.143965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.144513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.144976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.144991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.145128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.145594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.145878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.145893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.146220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 00:38:43.091 [2024-07-12 11:44:29.146701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.091 [2024-07-12 11:44:29.146716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.091 qpair failed and we were unable to recover it. 
00:38:43.091 [2024-07-12 11:44:29.146807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.146822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.146904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.146929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.147906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.147921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.148941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.148956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.149978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.149993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.091 [2024-07-12 11:44:29.150747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.091 qpair failed and we were unable to recover it.
00:38:43.091 [2024-07-12 11:44:29.150840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.150855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.150931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.150947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.151912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.151928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.152958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.152972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.153908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.153922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.154915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.154928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.155935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.155948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.156981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.156996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.157094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.157177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.157265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.157357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.092 [2024-07-12 11:44:29.157548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.092 qpair failed and we were unable to recover it.
00:38:43.092 [2024-07-12 11:44:29.157618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.093 [2024-07-12 11:44:29.157632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.093 qpair failed and we were unable to recover it.
00:38:43.093 [2024-07-12 11:44:29.157704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.157719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.157856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.157871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.158498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.158904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.158919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.159182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.159829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.159918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.159933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.160422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.160841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.160854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.161236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.161769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.161863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.161876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.162407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.162784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.162940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.162954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.163418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.163864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.163977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.163991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.164560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.164919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.164933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.165113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.165742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.165974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.165988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.166256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 
00:38:43.093 [2024-07-12 11:44:29.166711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.093 [2024-07-12 11:44:29.166725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.093 qpair failed and we were unable to recover it. 00:38:43.093 [2024-07-12 11:44:29.166858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.166871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.166937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.166951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.167302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.167904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.167919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.167994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.168344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.168836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.168943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.168958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.169506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.169937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.169952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.170026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.170143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.170254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.170420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.170525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.170734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.170851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.170872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.171504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.171879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.171977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.171992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.172651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.172917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.172931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.173408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.173832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.173846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.174032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.174195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.174275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.174365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.174597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.174698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.174912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.174927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.175071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.175085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.175156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.175171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 00:38:43.094 [2024-07-12 11:44:29.175255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.094 [2024-07-12 11:44:29.175270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.094 qpair failed and we were unable to recover it. 
00:38:43.094 [2024-07-12 11:44:29.175385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.175540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.175653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.175760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.175845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.175926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.175940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.176389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.176884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.176978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.176993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.177396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.177827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.177915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.177930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.178585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.178933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.178946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.095 [2024-07-12 11:44:29.179019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.179034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.179120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.179135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.179313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.179328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.179466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.179481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 00:38:43.095 [2024-07-12 11:44:29.179564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.095 [2024-07-12 11:44:29.179578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.095 qpair failed and we were unable to recover it. 
00:38:43.096 [2024-07-12 11:44:29.183816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.183838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.183922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.183947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.184046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.184072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.184154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.184170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.184242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.184257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.188974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.188993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.189077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.189097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.096 [2024-07-12 11:44:29.189178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.096 [2024-07-12 11:44:29.189194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.096 qpair failed and we were unable to recover it.
00:38:43.097 [2024-07-12 11:44:29.191421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.191513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.191596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.191676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.191769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.191867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.191954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.191968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.192375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.192840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.192962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.192981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.193633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.193903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.193922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.194311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.194863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.194974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.194988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.195532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.195887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.195902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.196032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.196578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.196941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.196956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.197110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.197642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.197974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.197988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.198193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.198854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.198937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.198950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.199575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.199842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.199996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.200014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 
00:38:43.097 [2024-07-12 11:44:29.200092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.200107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.200178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.097 [2024-07-12 11:44:29.200193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.097 qpair failed and we were unable to recover it. 00:38:43.097 [2024-07-12 11:44:29.200277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 
00:38:43.098 [2024-07-12 11:44:29.200547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 00:38:43.098 [2024-07-12 11:44:29.200951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.098 [2024-07-12 11:44:29.200966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.098 qpair failed and we were unable to recover it. 
00:38:43.098 [2024-07-12 11:44:29.201039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.201924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.201988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.202981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.202994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.203964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.203979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.204932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.204946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.205893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.205908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.206960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.206975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.207899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.207989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.098 [2024-07-12 11:44:29.208752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.098 [2024-07-12 11:44:29.208766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.098 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.208840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.208855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.208932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.208947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.209945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.209959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.210978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.210992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.099 [2024-07-12 11:44:29.211605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.099 qpair failed and we were unable to recover it.
00:38:43.099 [2024-07-12 11:44:29.211691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.211706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.211775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.211788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.211862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.211876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.211944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.211958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.212026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.212177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.212265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.212356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.212541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.212627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.212799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.212813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.213389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.213935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.213950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.214020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.214257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.214356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.214553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.214640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.214808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.214971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.214985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.215062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.215228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.215394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.215584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.215684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.215769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.215783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.216217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.216807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.216906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.216921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.217080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.217095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.217163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.217177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.217323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.217337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.099 [2024-07-12 11:44:29.217481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.217496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 
00:38:43.099 [2024-07-12 11:44:29.217643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.099 [2024-07-12 11:44:29.217657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.099 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.217805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.217820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.217907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.217922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.218345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.218770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.218853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.218867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.219559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.219945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.219960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.220104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.220725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.220920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.220986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.221154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.221766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.221937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.221952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.222028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.222042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 00:38:43.100 [2024-07-12 11:44:29.222116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.100 [2024-07-12 11:44:29.222131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.100 qpair failed and we were unable to recover it. 
00:38:43.100 [2024-07-12 11:44:29.222281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.222912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.222926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.223844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.223858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.224921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.224934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.225863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.225878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.226014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.226028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.226115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.226129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.100 [2024-07-12 11:44:29.226190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.100 [2024-07-12 11:44:29.226203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.100 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.226913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.226987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.227924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.227938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.228854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.228870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.229906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.229921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.230942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.230955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.231949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.231973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.232910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.232997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.233929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.233943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.234026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.101 [2024-07-12 11:44:29.234040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.101 qpair failed and we were unable to recover it.
00:38:43.101 [2024-07-12 11:44:29.234108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.102 [2024-07-12 11:44:29.234122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.102 qpair failed and we were unable to recover it.
00:38:43.102 [2024-07-12 11:44:29.234210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.102 [2024-07-12 11:44:29.234224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.102 qpair failed and we were unable to recover it.
00:38:43.102 [2024-07-12 11:44:29.234360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.102 [2024-07-12 11:44:29.234374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.102 qpair failed and we were unable to recover it.
00:38:43.102 [2024-07-12 11:44:29.234456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.102 [2024-07-12 11:44:29.234471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.102 qpair failed and we were unable to recover it.
00:38:43.102 [2024-07-12 11:44:29.234615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.102 [2024-07-12 11:44:29.234630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.102 qpair failed and we were unable to recover it.
00:38:43.102 [2024-07-12 11:44:29.234716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.234730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.234800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.234814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.234962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.234976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.235397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.235812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.235982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.235996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.236565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.236837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.236997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.237132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.237236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.237322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.237489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.237684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.237777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.237854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.237868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.238456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.238824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.238923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.238938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.239474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.239875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.239890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.240022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.240553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.240919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.240933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.241094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.241632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.241971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.241985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.242188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 
00:38:43.102 [2024-07-12 11:44:29.242726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.102 qpair failed and we were unable to recover it. 00:38:43.102 [2024-07-12 11:44:29.242902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.102 [2024-07-12 11:44:29.242917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.103 qpair failed and we were unable to recover it. 00:38:43.103 [2024-07-12 11:44:29.243067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.103 [2024-07-12 11:44:29.243081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.103 qpair failed and we were unable to recover it. 00:38:43.103 [2024-07-12 11:44:29.243159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.103 [2024-07-12 11:44:29.243174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.103 qpair failed and we were unable to recover it. 
00:38:43.103 [2024-07-12 11:44:29.243272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.243983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.243997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.244977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.244991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.245977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.245997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.246874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.246889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.247922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.247936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.248921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.248990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.249968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.249982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.250059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.250073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.250177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.250199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.250355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.250383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.103 qpair failed and we were unable to recover it.
00:38:43.103 [2024-07-12 11:44:29.250500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.103 [2024-07-12 11:44:29.250520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.250610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.250630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.250787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.250808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.250890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.250909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.251980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.251994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.252857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.252871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.253924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.253938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.254944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.254958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.255981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.255996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.256071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.256085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.256150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.256164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.256230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.104 [2024-07-12 11:44:29.256245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.104 qpair failed and we were unable to recover it.
00:38:43.104 [2024-07-12 11:44:29.256317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.256332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.256433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.256449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.256648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.256662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.256740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.256754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.256964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.256978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 
00:38:43.104 [2024-07-12 11:44:29.257073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 
00:38:43.104 [2024-07-12 11:44:29.257597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.257954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.257968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 
00:38:43.104 [2024-07-12 11:44:29.258064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 
00:38:43.104 [2024-07-12 11:44:29.258789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.258971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.258985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.259061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.259075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.259141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.259156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 
00:38:43.104 [2024-07-12 11:44:29.259237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.259251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.104 qpair failed and we were unable to recover it. 00:38:43.104 [2024-07-12 11:44:29.259332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.104 [2024-07-12 11:44:29.259346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.259414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.259571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.259654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.259743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.259827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.259913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.259928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.260189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.260792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.260946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.260961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.261396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.261751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.261913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.261928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.262462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.262910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.262925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.263017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.263559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.263930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.263945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.264033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.264047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.264180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.264194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.264347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.264362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.264446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.264461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 00:38:43.105 [2024-07-12 11:44:29.264529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.105 [2024-07-12 11:44:29.264543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.105 qpair failed and we were unable to recover it. 
00:38:43.105 [2024-07-12 11:44:29.264628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.105 [2024-07-12 11:44:29.264644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.105 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and "qpair failed and we were unable to recover it." messages repeat for tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420) from 11:44:29.264813 through 11:44:29.276487 ...]
00:38:43.108 [2024-07-12 11:44:29.276801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.108 [2024-07-12 11:44:29.276815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.108 qpair failed and we were unable to recover it.
00:38:43.108 [2024-07-12 11:44:29.276903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.276917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 
00:38:43.108 [2024-07-12 11:44:29.277484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.277844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.277995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 
00:38:43.108 [2024-07-12 11:44:29.278083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 
00:38:43.108 [2024-07-12 11:44:29.278617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.278856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.278870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 
00:38:43.108 [2024-07-12 11:44:29.279218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.279645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 
00:38:43.108 [2024-07-12 11:44:29.279862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.279877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.280006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.280020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.280117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.280131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.280201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.108 qpair failed and we were unable to recover it. 00:38:43.108 [2024-07-12 11:44:29.280282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.108 [2024-07-12 11:44:29.280296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.280390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.280495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.280581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.280676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.280878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.280974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.280988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.281568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.281979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.281993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.282172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.282739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.282837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.282852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.283342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.283791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.283872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.283886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 
00:38:43.109 [2024-07-12 11:44:29.284464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.109 qpair failed and we were unable to recover it. 00:38:43.109 [2024-07-12 11:44:29.284707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.109 [2024-07-12 11:44:29.284728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.284808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.284827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.284910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.284930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 
00:38:43.110 [2024-07-12 11:44:29.285037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.285200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.285384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.285555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.285747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 
00:38:43.110 [2024-07-12 11:44:29.285872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.285892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 
00:38:43.110 [2024-07-12 11:44:29.286487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.286954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.286969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 00:38:43.110 [2024-07-12 11:44:29.287108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.110 [2024-07-12 11:44:29.287122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.110 qpair failed and we were unable to recover it. 
00:38:43.110 [2024-07-12 11:44:29.287209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.287908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.287996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.288955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.288969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.289119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.110 [2024-07-12 11:44:29.289135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.110 qpair failed and we were unable to recover it.
00:38:43.110 [2024-07-12 11:44:29.289214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.289783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.289998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.290911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.290924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.291942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.291956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.292869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.292883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.293931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.293946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.294015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.294034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.111 [2024-07-12 11:44:29.294118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.111 [2024-07-12 11:44:29.294132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.111 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.294935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.294949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.295945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.295960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.296949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.296963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.297905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.112 [2024-07-12 11:44:29.297919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.112 qpair failed and we were unable to recover it.
00:38:43.112 [2024-07-12 11:44:29.298096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.112 [2024-07-12 11:44:29.298110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.112 qpair failed and we were unable to recover it. 00:38:43.112 [2024-07-12 11:44:29.298194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.112 [2024-07-12 11:44:29.298208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.112 qpair failed and we were unable to recover it. 00:38:43.112 [2024-07-12 11:44:29.298277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.112 [2024-07-12 11:44:29.298292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.112 qpair failed and we were unable to recover it. 00:38:43.112 [2024-07-12 11:44:29.298409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.112 [2024-07-12 11:44:29.298424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.112 qpair failed and we were unable to recover it. 00:38:43.112 [2024-07-12 11:44:29.298579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.112 [2024-07-12 11:44:29.298602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.298778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.298801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.298911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.298936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.299273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.299810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.299898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.299912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.300331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.300799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.300940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.300954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.301384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.301903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.301917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.301988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.302673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.302903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.302918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.303056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.303071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 00:38:43.113 [2024-07-12 11:44:29.303167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.113 [2024-07-12 11:44:29.303188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.113 qpair failed and we were unable to recover it. 
00:38:43.113 [2024-07-12 11:44:29.303288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.303402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.303508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.303606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.303756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.303837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.303950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.303964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.304288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.304792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.304947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.304961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.305464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.305805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.305918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.305933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.306399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 00:38:43.114 [2024-07-12 11:44:29.306892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.114 [2024-07-12 11:44:29.306906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.114 qpair failed and we were unable to recover it. 
00:38:43.114 [2024-07-12 11:44:29.307050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it. 00:38:43.115 [2024-07-12 11:44:29.307139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it. 00:38:43.115 [2024-07-12 11:44:29.307225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it. 00:38:43.115 [2024-07-12 11:44:29.307306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it. 00:38:43.115 [2024-07-12 11:44:29.307471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it. 
00:38:43.115 [2024-07-12 11:44:29.307579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it.
00:38:43.115 [2024-07-12 11:44:29.307809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it.
00:38:43.115 [2024-07-12 11:44:29.307924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.115 [2024-07-12 11:44:29.307940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.115 qpair failed and we were unable to recover it.
[… the identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" pair repeats for tqpair=0x61500033fe80, addr=10.0.0.2, port=4420, with timestamps advancing from 11:44:29.308023 through 11:44:29.320034 (roughly 110 further occurrences elided) …]
00:38:43.117 [2024-07-12 11:44:29.320112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.117 [2024-07-12 11:44:29.320126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.117 qpair failed and we were unable to recover it. 00:38:43.117 [2024-07-12 11:44:29.320201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.117 [2024-07-12 11:44:29.320216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.117 qpair failed and we were unable to recover it. 00:38:43.117 [2024-07-12 11:44:29.320281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.117 [2024-07-12 11:44:29.320294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.320361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.320462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.320572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.320653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.320804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.320888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.320902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.321087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.321672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.321920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.321934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.322196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.322728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.322903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.322928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.323232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 
00:38:43.118 [2024-07-12 11:44:29.323703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.118 [2024-07-12 11:44:29.323874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.118 qpair failed and we were unable to recover it. 00:38:43.118 [2024-07-12 11:44:29.323946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.323960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.324216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.324822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.324916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.324930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.325311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.325726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.325924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.325991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.326180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.326701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.326955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.326970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.327111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.327196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.327277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.327370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.327477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 00:38:43.119 [2024-07-12 11:44:29.327574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.119 [2024-07-12 11:44:29.327588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.119 qpair failed and we were unable to recover it. 
00:38:43.119 [2024-07-12 11:44:29.327663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.119 [2024-07-12 11:44:29.327677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.119 qpair failed and we were unable to recover it.
00:38:43.119 [2024-07-12 11:44:29.327760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.119 [2024-07-12 11:44:29.327774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.119 qpair failed and we were unable to recover it.
00:38:43.119 [2024-07-12 11:44:29.327908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.119 [2024-07-12 11:44:29.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.119 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.327986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.327999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.328893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.328906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.329906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.329994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.330914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.330990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.331943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.331958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.332028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.332042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.332137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.332152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.332219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.332234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.120 qpair failed and we were unable to recover it.
00:38:43.120 [2024-07-12 11:44:29.332314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.120 [2024-07-12 11:44:29.332328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.332859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.332876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.333951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.333965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.334923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.334939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.335968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.335982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.336089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.336239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.336417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.336500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.121 [2024-07-12 11:44:29.336586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.121 qpair failed and we were unable to recover it.
00:38:43.121 [2024-07-12 11:44:29.336666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.336680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.336758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.336772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.336915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.336929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.337927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.337942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.338898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.338912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.122 [2024-07-12 11:44:29.339898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.122 qpair failed and we were unable to recover it.
00:38:43.122 [2024-07-12 11:44:29.339982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.122 [2024-07-12 11:44:29.339996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.122 qpair failed and we were unable to recover it. 00:38:43.122 [2024-07-12 11:44:29.340084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.122 [2024-07-12 11:44:29.340110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.122 qpair failed and we were unable to recover it. 00:38:43.122 [2024-07-12 11:44:29.340190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.122 [2024-07-12 11:44:29.340211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.122 qpair failed and we were unable to recover it. 00:38:43.122 [2024-07-12 11:44:29.340302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.122 [2024-07-12 11:44:29.340322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.122 qpair failed and we were unable to recover it. 00:38:43.122 [2024-07-12 11:44:29.340469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.122 [2024-07-12 11:44:29.340490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.122 qpair failed and we were unable to recover it. 
00:38:43.122 [2024-07-12 11:44:29.340576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.340596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.340685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.340706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.340788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.340804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.340875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.340890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.341175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.341707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.341881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.341895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.342271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.342792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.342947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.342961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.343360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.343943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.343957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.344050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.344499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.344869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.344884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.345060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.345074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 
00:38:43.123 [2024-07-12 11:44:29.345141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.345155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.345229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.345242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.345310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.345324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.123 [2024-07-12 11:44:29.345405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.123 [2024-07-12 11:44:29.345421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.123 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.345556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.345570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.345646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.345661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.345727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.345740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.345877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.345892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.345962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.345976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.346123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.346667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.346921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.346935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.347250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.347745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.347930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.347995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.348009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 00:38:43.124 [2024-07-12 11:44:29.348074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.124 [2024-07-12 11:44:29.348088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.124 qpair failed and we were unable to recover it. 
00:38:43.124 [2024-07-12 11:44:29.348170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.124 [2024-07-12 11:44:29.348185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.124 qpair failed and we were unable to recover it.
00:38:43.127 (the three-line failure sequence above repeats continuously from [2024-07-12 11:44:29.348316] through [2024-07-12 11:44:29.360375], cycling tqpairs 0x61500033fe80, 0x61500032ff80, 0x61500032d780, and 0x615000350000, all with addr=10.0.0.2, port=4420)
00:38:43.127 [2024-07-12 11:44:29.360459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.360473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.360633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.360649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.360782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.360798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.360864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.360878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.360957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.360971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 
00:38:43.127 [2024-07-12 11:44:29.361120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 
00:38:43.127 [2024-07-12 11:44:29.361603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.361968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.361990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 
00:38:43.127 [2024-07-12 11:44:29.362238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 
00:38:43.127 [2024-07-12 11:44:29.362757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.362930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.362944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 
00:38:43.127 [2024-07-12 11:44:29.363341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.127 qpair failed and we were unable to recover it. 00:38:43.127 [2024-07-12 11:44:29.363629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.127 [2024-07-12 11:44:29.363644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.363716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.363730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.363805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.363819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.363886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.363901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.363970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.363984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.364301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.364768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.364945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.364959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.365327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.365776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.365948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.365978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.366102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.366140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.366288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.366328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.366474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.366521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.366648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.366688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.366841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.366881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.367300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.367841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.367953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.367972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.368050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.368163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.368264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.368428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 
00:38:43.128 [2024-07-12 11:44:29.368594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.368820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.128 [2024-07-12 11:44:29.368858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.128 qpair failed and we were unable to recover it. 00:38:43.128 [2024-07-12 11:44:29.369045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.129 [2024-07-12 11:44:29.369085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.129 qpair failed and we were unable to recover it. 00:38:43.129 [2024-07-12 11:44:29.369222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.129 [2024-07-12 11:44:29.369260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.129 qpair failed and we were unable to recover it. 00:38:43.129 [2024-07-12 11:44:29.369451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.129 [2024-07-12 11:44:29.369494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.129 qpair failed and we were unable to recover it. 
00:38:43.129 [2024-07-12 11:44:29.369619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.369659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.369866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.369905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.370966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.370981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.371141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.371180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.371302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.371341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.371558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.371604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.371809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.371853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.371992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.372037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.372196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.372238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.372369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.372399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.372602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.372643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.372790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.372832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.372974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.373015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.373209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.373250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.373463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.373506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.373653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.373695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.373911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.373952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.374104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.374145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.374271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.374312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.374469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.374511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.374670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.374712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.374926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.374968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.375162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.375182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.375344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.375395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.375683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.375725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.375850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.375891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.376124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.376289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.376479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.376653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.376839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.376966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.377007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.377284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.377304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.377411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.377431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.129 qpair failed and we were unable to recover it.
00:38:43.129 [2024-07-12 11:44:29.377617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.129 [2024-07-12 11:44:29.377657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.377801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.377843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.377991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.378860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.378899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.379131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.379366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.379470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.379571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.379751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.379965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.380145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.380337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.380433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.380598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.380841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.380881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.381855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.381895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.382082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.382123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.382265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.382304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.382585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.382599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.382684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.382724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.382845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.382885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.383876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.383917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.130 qpair failed and we were unable to recover it.
00:38:43.130 [2024-07-12 11:44:29.384034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.130 [2024-07-12 11:44:29.384082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.384168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.384183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.384319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.384334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.385137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.385162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.385337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.385352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.385434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.385448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.385695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.385738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.386842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.386883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.387067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.387108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.387249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.387289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.387477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.387492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.388579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.388609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.388720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.388735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.388911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.388953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.389924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.389937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.390844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.390857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.391894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.131 [2024-07-12 11:44:29.391920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.131 qpair failed and we were unable to recover it.
00:38:43.131 [2024-07-12 11:44:29.392092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.392248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.392347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.392501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.392668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 
00:38:43.131 [2024-07-12 11:44:29.392844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.392883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.393072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.393113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.393371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.393390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.393479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.393493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.131 qpair failed and we were unable to recover it. 00:38:43.131 [2024-07-12 11:44:29.393573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.131 [2024-07-12 11:44:29.393586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.393834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.393874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.394028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.394073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.394282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.394323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.394475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.394517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.394762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.394802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.394999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.395040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.395173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.395212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.395421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.395462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.395682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.395722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.395867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.395906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.396203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.396244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.396414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.396463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.396665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.396680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.396817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.396831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.396997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.397034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.397337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.397387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.397595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.397636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.397787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.397827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.398081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.398122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.398321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.398335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.399652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.399676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.399776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.399791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.399933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.399947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.400090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.400104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.401447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.401472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.401645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.401660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.401881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.401895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.402060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.402075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.402158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.402203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.402411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.402454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.402603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.402644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.402793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.402835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.403038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.403206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.403425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.403535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.403645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.403792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.403831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.404038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.404078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.404215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.404254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 
00:38:43.132 [2024-07-12 11:44:29.404524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.404539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.404781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.404821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.405022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.405062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.132 qpair failed and we were unable to recover it. 00:38:43.132 [2024-07-12 11:44:29.405203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.132 [2024-07-12 11:44:29.405217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.405296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.405308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 
00:38:43.133 [2024-07-12 11:44:29.405526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.405568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.405707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.405747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.405946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.406256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.406297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.406423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.406465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 
00:38:43.133 [2024-07-12 11:44:29.406662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.406701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.406859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.406904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.407033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.407262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.407563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 
00:38:43.133 [2024-07-12 11:44:29.407656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.407749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.407895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.407909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.408088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.408104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.408278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.408317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 
00:38:43.133 [2024-07-12 11:44:29.408463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.408504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.408643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.408682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.408806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.408847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.408967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.409007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 00:38:43.133 [2024-07-12 11:44:29.409205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.133 [2024-07-12 11:44:29.409245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.133 qpair failed and we were unable to recover it. 
00:38:43.133 [2024-07-12 11:44:29.409498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.409537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.409739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.409779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.409972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.410135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.410437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.410584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.410748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.410866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.410881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.411080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.411248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.411496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.411755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.411857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.411990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.412005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.412076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.412089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.412786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.412811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.413028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.413043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.413268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.413282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.413961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.413986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.414101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.414115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.414315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.414356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.414604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.414646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.133 [2024-07-12 11:44:29.414864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.133 [2024-07-12 11:44:29.414904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.133 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.415981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.415995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.416889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.416902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.417964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.417979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.418080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.418133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.418261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.418301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.418458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.418500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.418700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.418741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.418867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.418908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.419841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.419878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.420042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.420235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.420446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.420557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.134 [2024-07-12 11:44:29.420709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.134 qpair failed and we were unable to recover it.
00:38:43.134 [2024-07-12 11:44:29.420803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.420816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.420884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.420897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.420993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.421032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.421161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.421201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.421329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.421369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.421509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.421554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.421619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.421632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.135 [2024-07-12 11:44:29.422345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.135 [2024-07-12 11:44:29.422370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.135 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.422572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.422588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.422728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.422770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.423890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.423918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.424032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.424076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.424281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.424323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.424600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.424658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.424948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.425031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.425205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.425252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.425533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.425555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.425633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.418 [2024-07-12 11:44:29.425652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.418 qpair failed and we were unable to recover it.
00:38:43.418 [2024-07-12 11:44:29.425884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.425904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.426827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.426854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.427876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.427982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.428968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.428985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.429836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.429983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.430002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.430088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.430101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.430181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.430200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.430265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.430278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.430345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.419 [2024-07-12 11:44:29.430358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.419 qpair failed and we were unable to recover it.
00:38:43.419 [2024-07-12 11:44:29.430517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.419 [2024-07-12 11:44:29.430532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.419 qpair failed and we were unable to recover it. 00:38:43.419 [2024-07-12 11:44:29.430676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.419 [2024-07-12 11:44:29.430693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.419 qpair failed and we were unable to recover it. 00:38:43.419 [2024-07-12 11:44:29.430773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.419 [2024-07-12 11:44:29.430787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.419 qpair failed and we were unable to recover it. 00:38:43.419 [2024-07-12 11:44:29.430851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.419 [2024-07-12 11:44:29.430865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.419 qpair failed and we were unable to recover it. 00:38:43.419 [2024-07-12 11:44:29.431013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.419 [2024-07-12 11:44:29.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.419 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.431163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.431258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.431413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.431538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.431626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.431705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.431875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.431890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.432349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.432897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.432938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.433037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.433601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.433929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.433942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.434176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.434780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.434985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.434999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.435386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.420 [2024-07-12 11:44:29.435847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 
00:38:43.420 [2024-07-12 11:44:29.435963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.420 [2024-07-12 11:44:29.435977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.420 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.436641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.436967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.436981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.437115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.437279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.437376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.437479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.437598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.437797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.437920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.437943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.438518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.438944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.438957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.439109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.439124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.439194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.439210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.439361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.439376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.439458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.439471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 00:38:43.421 [2024-07-12 11:44:29.439603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.421 [2024-07-12 11:44:29.439617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.421 qpair failed and we were unable to recover it. 
00:38:43.421 [2024-07-12 11:44:29.439748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.439763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.439837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.439852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.439925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.439939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.421 [2024-07-12 11:44:29.440957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.421 [2024-07-12 11:44:29.440970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.421 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.441786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.441799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.442964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.442989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.443984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.443997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.444927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.444941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.422 [2024-07-12 11:44:29.445746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.422 [2024-07-12 11:44:29.445761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.422 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.445840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.445854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.446859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.446874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.447854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.447869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.448962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.448976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.449863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.449996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.423 qpair failed and we were unable to recover it.
00:38:43.423 [2024-07-12 11:44:29.450758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.423 [2024-07-12 11:44:29.450772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.450909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.450923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.451876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.451891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.452908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.452922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.453939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.453953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.454031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.454177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.454326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.454475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.424 [2024-07-12 11:44:29.454627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.424 qpair failed and we were unable to recover it.
00:38:43.424 [2024-07-12 11:44:29.454706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.454721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.454873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.454888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.454981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.454996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.455089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.455274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 
00:38:43.424 [2024-07-12 11:44:29.455388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.455643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.455743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.455905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.455920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 00:38:43.424 [2024-07-12 11:44:29.456002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.424 [2024-07-12 11:44:29.456017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.424 qpair failed and we were unable to recover it. 
00:38:43.424 [2024-07-12 11:44:29.456126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.456293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.456416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.456617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.456725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.456941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.456957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.457564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.457914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.457928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.458131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.458282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.458443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.458601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.458731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.458901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.458917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.459003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.459711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.459908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.459989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.460095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.460177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.460346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.460582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.460769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.460958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.460984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.461289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 
00:38:43.425 [2024-07-12 11:44:29.461824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.425 [2024-07-12 11:44:29.461837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.425 qpair failed and we were unable to recover it. 00:38:43.425 [2024-07-12 11:44:29.461920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.461935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.462524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.462935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.462953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.463194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.463872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.463885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.463976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.464151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.464256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.464431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.464531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.464748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.464860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.464874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.465361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.465832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.465979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.465993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.466638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.466949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.466964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.467113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.467127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 
00:38:43.426 [2024-07-12 11:44:29.467217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.426 [2024-07-12 11:44:29.467232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.426 qpair failed and we were unable to recover it. 00:38:43.426 [2024-07-12 11:44:29.467318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.467400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.467585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.467684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.467774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.467885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.467899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.468351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.468900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.468914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.468994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.469449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.469960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.469975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.470066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.470707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.470972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.470987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.471147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.471162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.427 [2024-07-12 11:44:29.471304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.471319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 
00:38:43.427 [2024-07-12 11:44:29.471466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.427 [2024-07-12 11:44:29.471481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.427 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.471559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.471572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.471714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.471729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.471878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.471892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.472027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.472122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.472197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.472409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.472559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.472644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.472741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.472755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.473694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.473942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.473956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.474043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.474258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.474370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.474488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.474642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.474793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.474810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.475107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.475706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.475973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.475988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.476138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.476287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.476466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.476557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.476774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.476873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.476888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.477039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.477054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 
00:38:43.428 [2024-07-12 11:44:29.477152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.477166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.428 [2024-07-12 11:44:29.477253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.428 [2024-07-12 11:44:29.477267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.428 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.477358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.477373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.477538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.477553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.477637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.477651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 
00:38:43.429 [2024-07-12 11:44:29.477729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.477744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.477885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.477900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 
00:38:43.429 [2024-07-12 11:44:29.478489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 00:38:43.429 [2024-07-12 11:44:29.478982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.429 [2024-07-12 11:44:29.478997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.429 qpair failed and we were unable to recover it. 
00:38:43.429 [2024-07-12 11:44:29.479139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.479959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.479974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.480946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.480960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.481943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.481958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.482031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.482049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.482112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.482125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.482275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.482289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.482516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.482531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.429 qpair failed and we were unable to recover it.
00:38:43.429 [2024-07-12 11:44:29.482627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.429 [2024-07-12 11:44:29.482642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.482776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.482791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.482866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.482881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.483973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.483987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.484923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.484938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.485975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.485990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.486918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.486995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.430 qpair failed and we were unable to recover it.
00:38:43.430 [2024-07-12 11:44:29.487936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.430 [2024-07-12 11:44:29.487951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.488889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.488912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.489980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.489995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.490889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.490904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.491055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.491069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.491149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.431 [2024-07-12 11:44:29.491163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.431 qpair failed and we were unable to recover it.
00:38:43.431 [2024-07-12 11:44:29.491307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 00:38:43.431 [2024-07-12 11:44:29.491402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 00:38:43.431 [2024-07-12 11:44:29.491546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 00:38:43.431 [2024-07-12 11:44:29.491626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 00:38:43.431 [2024-07-12 11:44:29.491782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 
00:38:43.431 [2024-07-12 11:44:29.491954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.431 [2024-07-12 11:44:29.491968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.431 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.492619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.492960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.492974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.493362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.493797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.493905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.493929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.494511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.494981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.494995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.495067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.495161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.495337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.495483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.495573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.495754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.495936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.495950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.496107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.496121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.496230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.496243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.496404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.496419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.496658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.496673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.496882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.496896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.496999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.497013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.497152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.497167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.432 [2024-07-12 11:44:29.497296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.497313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 
00:38:43.432 [2024-07-12 11:44:29.497395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.432 [2024-07-12 11:44:29.497409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.432 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.497483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.497497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.497591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.497604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.497734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.497748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.497824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.497837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.497974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.497989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.498194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.498370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.498539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.498629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.498786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.498892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.498906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.499470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.499809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.499828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.500167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.500750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.500913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.500926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.501227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.501854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.501935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.501948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.502085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.502099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.502174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.502188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.502259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.502272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 
00:38:43.433 [2024-07-12 11:44:29.502346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.502359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.433 [2024-07-12 11:44:29.502467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.433 [2024-07-12 11:44:29.502480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.433 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.502622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.502636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.502711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.502725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.502816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.502829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 
00:38:43.434 [2024-07-12 11:44:29.502964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.502978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.503114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.503128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.503209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.503223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.503436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.503452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 00:38:43.434 [2024-07-12 11:44:29.503540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.434 [2024-07-12 11:44:29.503553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.434 qpair failed and we were unable to recover it. 
00:38:43.434 [2024-07-12 11:44:29.503622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.503635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.503768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.503782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.503846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.503859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.503939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.503953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.504948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.504963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.505913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.505928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.506976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.506990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.507943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.507956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.508043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.508057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.434 [2024-07-12 11:44:29.508201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.434 [2024-07-12 11:44:29.508216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.434 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.508960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.508975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.509835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.509856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.510955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.510975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.511134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.511154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.511308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.511327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.511492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.511513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.511658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.511675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.511823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.511840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.512960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.512975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.513126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.513141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.513367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.513386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.435 [2024-07-12 11:44:29.513558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.435 [2024-07-12 11:44:29.513573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.435 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.513725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.513740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.513942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.513956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.514176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.514341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.514579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.514755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.514848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.514996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.515968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.515983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.516215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.516230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.516371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.516391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.516603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.516625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.516729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.516754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.516928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.516950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.517850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.436 [2024-07-12 11:44:29.517865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.436 qpair failed and we were unable to recover it.
00:38:43.436 [2024-07-12 11:44:29.518044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.518158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.518330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.518510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.518669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 
00:38:43.436 [2024-07-12 11:44:29.518921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.518936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.519018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.519199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.519391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.519655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 
00:38:43.436 [2024-07-12 11:44:29.519804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.519967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.519981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.520133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.520147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.436 [2024-07-12 11:44:29.520346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.436 [2024-07-12 11:44:29.520360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.436 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.520538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.520556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.520714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.520729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.520873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.520887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.520979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.520994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.521124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.521283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.521436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.521528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.521680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.521768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.522028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.522047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.522270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.522286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.522520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.522536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.522737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.522751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.522966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.522981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.523212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.523226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.523527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.523544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.523612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.523625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.523758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.523772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.523869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.523883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.523987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.524089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.524735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.524920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.524935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.525424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.525942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.525956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.437 [2024-07-12 11:44:29.526034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.526047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.526303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.526317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.526447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.526462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.526595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.526610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 00:38:43.437 [2024-07-12 11:44:29.526807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.437 [2024-07-12 11:44:29.526821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.437 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.526974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.526988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.527069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.527084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.527267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.527282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.527438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.527452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.527672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.527687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.527896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.527911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.528182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.528197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.528426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.528442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.528573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.528588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.528732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.528746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.528906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.528921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.529072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.529087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.529259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.529274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.529416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.529431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.529613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.529628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.529878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.529895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.530043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.530058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.530285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.530300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.530505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.530520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.530721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.530736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.530935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.530950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.531121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.531135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.531307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.531322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.531479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.531494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.531717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.531732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.531961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.531975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.532145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.532159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.532305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.532319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.532589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.532604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 00:38:43.438 [2024-07-12 11:44:29.532849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.438 [2024-07-12 11:44:29.532864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.438 qpair failed and we were unable to recover it. 
00:38:43.438 [2024-07-12 11:44:29.533112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.533279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.533499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.533657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.533811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.533961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.533976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.534155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.534171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.534344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.534359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.534587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.534602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.534770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.534785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.438 [2024-07-12 11:44:29.534930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.438 [2024-07-12 11:44:29.534945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.438 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.535896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.535910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.536111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.536126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.536332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.536347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.536570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.536585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.536739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.536753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.536835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.536849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.537049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.537064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.537267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.537282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.537445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.537460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.537615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.537631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.537833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.537847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.538861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.538875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.539073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.539088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.539323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.539345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.539514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.539529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.539699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.539714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.539968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.539982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.540134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.540148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.540382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.540397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.540598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.540612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.540830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.540844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.540998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.541167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.541413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.541566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.541785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.541941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.541956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.542175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.542189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.542347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.542362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.542524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.439 [2024-07-12 11:44:29.542539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.439 qpair failed and we were unable to recover it.
00:38:43.439 [2024-07-12 11:44:29.542736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.542750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.542971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.542985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.543217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.543232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.543480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.543495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.543693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.543709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.543883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.543901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.544964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.544978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.545209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.545224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.545449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.545464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.545630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.545647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.545801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.545950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.545970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.546189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.546204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.546355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.546369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.546605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.546620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.546791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.546807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.547959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.547973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.548151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.548166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.548315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.548329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.548495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.548510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.548723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.548738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.548911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.548925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.549074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.549088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.549236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.549250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.440 qpair failed and we were unable to recover it.
00:38:43.440 [2024-07-12 11:44:29.549401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.440 [2024-07-12 11:44:29.549416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.549643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.549657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.549807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.549822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.550024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.550038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.550141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.550155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.550306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.550321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.550553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.550568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.550790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.550805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.551018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.551033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.551293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.551307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.551555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.551570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.551715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.551730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.551877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.551891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.552880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.552895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.553029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.553047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.553221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.553236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.553383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.553398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.553634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.553650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.553876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.553890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.554142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.554157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.554315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.554330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.554550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.554565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.554768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.554782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.554980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.441 [2024-07-12 11:44:29.554994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.441 qpair failed and we were unable to recover it.
00:38:43.441 [2024-07-12 11:44:29.555145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.555160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.555312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.555327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.555584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.555599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.555742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.555757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.556029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 
00:38:43.441 [2024-07-12 11:44:29.556124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.556341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.556527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.556750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.556967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.556982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 
00:38:43.441 [2024-07-12 11:44:29.557118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.557133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.557275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.557289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.441 qpair failed and we were unable to recover it. 00:38:43.441 [2024-07-12 11:44:29.557467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.441 [2024-07-12 11:44:29.557482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.557572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.557585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.557785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.557799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.558035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.558052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.558286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.558305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.558457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.558473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.558697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.558712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.558891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.558906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.559130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.559145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.559368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.559391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.559486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.559499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.559665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.559679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.559922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.559937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.560165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.560269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.560412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.560646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.560737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.560907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.560924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.561123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.561138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.561361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.561376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.561537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.561552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.561691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.561705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.561903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.561918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.562712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.562981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.562993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.563204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.563219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.563363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.563385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.563535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.563549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.563645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.563660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.563798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.563812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.564006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.564021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.564159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.564174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 
00:38:43.442 [2024-07-12 11:44:29.564422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.564437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.564523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.564690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.442 [2024-07-12 11:44:29.564705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.442 qpair failed and we were unable to recover it. 00:38:43.442 [2024-07-12 11:44:29.564837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.564851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.564983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.564998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.565234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.565249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.565339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.565352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.565562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.565578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.565766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.565781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.565935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.565949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.566150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.566165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.566309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.566324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.566492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.566507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.566660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.566675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.566838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.566852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.567007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.567109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.567347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.567503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.567656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.567902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.567918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.568111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.568306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.568457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.568672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.568775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.568933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.568948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.569163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.569178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.569352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.569367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.569555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.569570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.569729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.569748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.569970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.569985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.570230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.570244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.570340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.570353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.570595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.570610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.570811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.570826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.570989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.571003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.571202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.571218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.571441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.571456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.571690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.571705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 
00:38:43.443 [2024-07-12 11:44:29.571970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.571985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.572083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.572097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.572247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.572262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.572487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.443 [2024-07-12 11:44:29.572502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.443 qpair failed and we were unable to recover it. 00:38:43.443 [2024-07-12 11:44:29.572687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.572702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.572853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.572868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.573108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.573123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.573366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.573395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.573554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.573574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.573782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.573802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.573963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.573983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.574149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.574169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.574396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.574439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.574703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.574744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.575010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.575029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.575268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.575288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.575533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.575551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.575750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.575764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.575922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.575937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.576168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.576209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.576511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.576566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.576791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.576806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.576951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.576965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.577129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.577143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.577290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.577305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.577464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.577479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.577627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.577641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.577808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.577823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.578036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.578051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.578189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.578204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.578349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.578415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.578635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.578676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.578888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.578929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.579149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.579189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.579500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.579543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.579828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.579868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.580106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.580120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.580325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.580365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.580566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.580607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 00:38:43.444 [2024-07-12 11:44:29.580867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.444 [2024-07-12 11:44:29.580907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.444 qpair failed and we were unable to recover it. 
00:38:43.444 [2024-07-12 11:44:29.581127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.581167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.581419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.581462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.581693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.581734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.581912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.581927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.582131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.582172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.582444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.582486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.582769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.582810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.583082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.583123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.583423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.583466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.583703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.583744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.584046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.584087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.584303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.584344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.584616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.584707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.585023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.585072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.585221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.585264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.585550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.585594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.585798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.585818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.585923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.585965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.586126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.586168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.586478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.586529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.586645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.586671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.586833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.586874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.587092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.587133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.587401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.587444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.587740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.587781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.587980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.588021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.588245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.588286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.588577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.588620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.588817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.588837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.588996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.589036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.589240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.589281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.589495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.589538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.589767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.589820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.590122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.590164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.590401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.590444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.590731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.590773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.590923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.590963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.591213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.591233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.591470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.591490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 00:38:43.445 [2024-07-12 11:44:29.591729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.445 [2024-07-12 11:44:29.591749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.445 qpair failed and we were unable to recover it. 
00:38:43.445 [2024-07-12 11:44:29.591989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.445 [2024-07-12 11:44:29.592009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.445 qpair failed and we were unable to recover it.
00:38:43.445 [2024-07-12 11:44:29.592243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.592263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.592497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.592514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.592715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.592730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.592985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.593027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.593311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.593352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.593674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.593715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.593972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.594013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.594290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.594330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.594634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.594676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.594895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.594910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.595145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.595160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.595367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.595388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.595540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.595559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.595781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.595796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.595982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.596022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.596257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.596298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.596609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.596652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.596930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.596970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.597236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.597277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.597559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.597606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.597822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.597869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.598067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.598081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.598305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.598319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.598405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.598419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.598646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.598686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.598873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.598914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.599189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.599230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.599493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.599535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.599835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.599878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.600091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.600105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.600339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.600353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.600527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.600542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.600686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.600701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.600799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.600812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.601050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.601091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.601285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.601325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.601541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.601583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.601731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.601772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.601903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.601943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.602227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.602242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.602405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.446 [2024-07-12 11:44:29.602420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.446 qpair failed and we were unable to recover it.
00:38:43.446 [2024-07-12 11:44:29.602649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.602663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.602774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.602829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.603050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.603091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.603320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.603361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.603524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.603565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.603886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.603954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.604183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.604226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.604421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.604464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.604617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.604633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.604728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.604741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.605005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.605047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.605283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.605323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.605631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.605673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.605878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.605919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.606124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.606164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.606389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.606431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.606590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.606634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.606785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.606799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.606949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.607000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.607179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.607219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.607425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.607468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.607635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.607675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.607810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.607851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.608104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.608144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.608399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.608441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.608700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.608741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.608962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.609002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.609212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.609252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.609400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.609442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.609643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.609657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.609834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.609874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.610177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.610218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.610472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.610514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.610676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.610716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.610994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.611034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.611235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.611276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.611505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.611549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.611756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.611797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.612096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.612115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.612204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.612217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.447 [2024-07-12 11:44:29.612410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.447 [2024-07-12 11:44:29.612453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.447 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.612593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.612633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.612817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.612857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.613118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.613159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.613429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.613470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.613740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.613825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.614171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.614255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.614442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.614491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.614731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.614774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.614950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.614971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.615172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.615214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.615424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.615467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.615726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.615768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.615969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.616011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.616272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.616313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.616552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.616595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.616849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.616891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.617096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.617138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.617415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.617466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.617745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.617786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.618044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.618064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.618280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.618301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.618541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.618561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.618722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.618738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.618889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.618929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.619209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.619250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.619510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.619552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.619809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.619849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.620146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.620187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.620416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.620457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.620662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.620703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.620920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.620935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.621143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.448 [2024-07-12 11:44:29.621183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.448 qpair failed and we were unable to recover it.
00:38:43.448 [2024-07-12 11:44:29.621461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.448 [2024-07-12 11:44:29.621504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.448 qpair failed and we were unable to recover it. 00:38:43.448 [2024-07-12 11:44:29.621725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.448 [2024-07-12 11:44:29.621776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.448 qpair failed and we were unable to recover it. 00:38:43.448 [2024-07-12 11:44:29.621931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.621945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.622124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.622165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.622290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.622331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.622491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.622533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.622807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.622848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.623100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.623140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.623279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.623319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.623530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.623572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.623870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.623910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.624182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.624222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.624530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.624616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.625038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.625122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.625440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.625486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.625762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.625782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.626047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.626102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.626366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.626419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.626714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.626755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.627057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.627098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.627400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.627444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.627580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.627621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.627876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.628117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.628159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.628440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.628484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.628737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.628784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.629034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.629075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.629281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.629324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.629538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.629581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.629833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.629874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.630171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.630212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.630502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.630545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.630779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.630820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.631069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.631110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.631404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.631447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.631724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.631765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.631971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.632013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.632269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.632310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.632526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.632584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.632808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.632850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 
00:38:43.449 [2024-07-12 11:44:29.633008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.633029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.633270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.633311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.633457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.449 [2024-07-12 11:44:29.633500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.449 qpair failed and we were unable to recover it. 00:38:43.449 [2024-07-12 11:44:29.633698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.633745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.633981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.634001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.634164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.634185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.634390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.634432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.634720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.634761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.635087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.635107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.635346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.635367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.635504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.635525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.635685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.635726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.635937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.635984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.636119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.636160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.636420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.636463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.636790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.636835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.637050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.637070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.637287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.637328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.637545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.637587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.637868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.637909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.638171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.638213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.638497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.638539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.638826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.638867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.639121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.639141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.639428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.639471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.639725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.639766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.639936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.639957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.640132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.640153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.640401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.640445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.640711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.640753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.641043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.641062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.641288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.641309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.641463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.641484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.641754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.641795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.641998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.642040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.450 [2024-07-12 11:44:29.642269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.642311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.642541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.642585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.642777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.642818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.643055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.643096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 00:38:43.450 [2024-07-12 11:44:29.643309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.450 [2024-07-12 11:44:29.643351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.450 qpair failed and we were unable to recover it. 
00:38:43.451 [2024-07-12 11:44:29.651205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.451 [2024-07-12 11:44:29.651223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.451 qpair failed and we were unable to recover it.
00:38:43.451 [2024-07-12 11:44:29.651374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.451 [2024-07-12 11:44:29.651412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.451 qpair failed and we were unable to recover it.
00:38:43.451 [2024-07-12 11:44:29.651634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.451 [2024-07-12 11:44:29.651677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.451 qpair failed and we were unable to recover it.
00:38:43.451 [2024-07-12 11:44:29.651871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.451 [2024-07-12 11:44:29.651912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.451 qpair failed and we were unable to recover it.
00:38:43.451 [2024-07-12 11:44:29.652086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.451 [2024-07-12 11:44:29.652130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.451 qpair failed and we were unable to recover it.
00:38:43.451 [2024-07-12 11:44:29.655046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.452 [2024-07-12 11:44:29.655066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.452 qpair failed and we were unable to recover it.
00:38:43.452 [2024-07-12 11:44:29.655159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.452 [2024-07-12 11:44:29.655178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.452 qpair failed and we were unable to recover it.
00:38:43.452 [2024-07-12 11:44:29.655470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.452 [2024-07-12 11:44:29.655522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.452 qpair failed and we were unable to recover it.
00:38:43.452 [2024-07-12 11:44:29.655789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.452 [2024-07-12 11:44:29.655832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.452 qpair failed and we were unable to recover it.
00:38:43.452 [2024-07-12 11:44:29.656094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.452 [2024-07-12 11:44:29.656138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.452 qpair failed and we were unable to recover it.
00:38:43.453 [2024-07-12 11:44:29.672000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.672015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 [2024-07-12 11:44:29.672150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.672165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 [2024-07-12 11:44:29.672345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.672400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 [2024-07-12 11:44:29.672708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1181324 Killed "${NVMF_APP[@]}" "$@" 00:38:43.453 [2024-07-12 11:44:29.672748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 [2024-07-12 11:44:29.672982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 
00:38:43.453 [2024-07-12 11:44:29.673245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 [2024-07-12 11:44:29.673401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:43.453 [2024-07-12 11:44:29.673633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 00:38:43.453 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:43.453 [2024-07-12 11:44:29.673854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.453 qpair failed and we were unable to recover it. 
00:38:43.453 [2024-07-12 11:44:29.673972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.453 [2024-07-12 11:44:29.673985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:43.454 [2024-07-12 11:44:29.674207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.674223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:43.454 [2024-07-12 11:44:29.674426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.674488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.674630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.674672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.674828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.674872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.675087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.675252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.675472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.675556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.675792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.675944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.675959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.676141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.676182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.676418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.676459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.676650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.676692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.676882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.676896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.677158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.677199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.677417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.677461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.677677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.677718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.677989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.678003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.678173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.678188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.678339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.678405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.678614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.678656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.678929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.678944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.679189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.679203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.679298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.679311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.679497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.679539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.679746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.679841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.680123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.680164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.680446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.680489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 [2024-07-12 11:44:29.680741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.680781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.680993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.681035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1182259 00:38:43.454 [2024-07-12 11:44:29.681302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.681318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1182259 00:38:43.454 [2024-07-12 11:44:29.681483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.681499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:43.454 [2024-07-12 11:44:29.681668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.681709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1182259 ']' 00:38:43.454 [2024-07-12 11:44:29.681894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.681936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.682074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.454 [2024-07-12 11:44:29.682115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 00:38:43.454 [2024-07-12 11:44:29.682334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.682350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.454 qpair failed and we were unable to recover it. 
00:38:43.454 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:43.454 [2024-07-12 11:44:29.682450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.454 [2024-07-12 11:44:29.682464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:43.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:43.455 [2024-07-12 11:44:29.682716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.682759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:43.455 [2024-07-12 11:44:29.682931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.682973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 11:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:43.455 [2024-07-12 11:44:29.683228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.683269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.683531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.683573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.684073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.684110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.684260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.684276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.684500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.684516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.684757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.684772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.685694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.685923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.685937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.686004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.686018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.686201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.686216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.686441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.686458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.686685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.686700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.686786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.686799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.687018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.687126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.687370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.687576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.687672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.687846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.687861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.688051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.688221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.688321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.688511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.688702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 00:38:43.455 [2024-07-12 11:44:29.688874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.455 [2024-07-12 11:44:29.688889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.455 qpair failed and we were unable to recover it. 
00:38:43.455 [2024-07-12 11:44:29.689034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.689049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.689134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.689148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.689308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.689323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.689597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.689640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.689938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.689981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.690240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.690282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.455 [2024-07-12 11:44:29.690551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.455 [2024-07-12 11:44:29.690568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.455 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.690719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.690735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.690881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.690897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.691915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.691931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.692153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.692168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.692394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.692412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.692575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.692591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.692730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.692745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.692910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.692924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.693082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.693097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.693348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.693364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.693467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.693481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.693635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.693662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.693809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.693825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.694041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.694277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.694452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.694658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.694900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.694987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.695085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.695258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.695505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.695660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.695893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.695908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.696937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.696952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.697114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.697129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.697247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.697274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.697438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.697463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.697577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.697598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.697861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.697881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.456 [2024-07-12 11:44:29.698027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.456 [2024-07-12 11:44:29.698047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.456 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.698278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.698297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.698387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.698403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.698574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.698589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.698742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.698756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.698902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.698917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.699089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.699103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.699328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.699343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.699499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.699515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.699739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.699757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.699905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.699921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.700083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.700247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.700401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.700637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.700744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.700994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.701208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.701366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.701534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.701748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.701963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.701978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.702063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.702075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.702299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.702315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.702542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.702557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.702781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.702794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.702883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.702896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.703984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.703999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.704150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.704165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.704298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.457 [2024-07-12 11:44:29.704314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.457 qpair failed and we were unable to recover it.
00:38:43.457 [2024-07-12 11:44:29.704539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.704564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.704751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.704772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.704870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.704890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.704983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.704998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.705977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.458 [2024-07-12 11:44:29.705991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.458 qpair failed and we were unable to recover it.
00:38:43.458 [2024-07-12 11:44:29.706180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.706283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.706448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.706538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.706680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 
00:38:43.458 [2024-07-12 11:44:29.706829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.706844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 
00:38:43.458 [2024-07-12 11:44:29.707609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.707930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.707943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.708129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 
00:38:43.458 [2024-07-12 11:44:29.708228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.708398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.708573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.708724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.708870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 
00:38:43.458 [2024-07-12 11:44:29.708981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.708994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 
00:38:43.458 [2024-07-12 11:44:29.709599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.709784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.709797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.710015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.458 [2024-07-12 11:44:29.710044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.458 qpair failed and we were unable to recover it. 00:38:43.458 [2024-07-12 11:44:29.710202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.710406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.710573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.710656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.710809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.710924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.710937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.711085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.711677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.711980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.711994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.712338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.712788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.712941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.712953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.713713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.713961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.713974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.714293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.714763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.714934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.714950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.715040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.715053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.715143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.715158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.715267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.715291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 00:38:43.459 [2024-07-12 11:44:29.715406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.459 [2024-07-12 11:44:29.715429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.459 qpair failed and we were unable to recover it. 
00:38:43.459 [2024-07-12 11:44:29.715587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.715611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.715701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.715724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.715866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.715881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 
00:38:43.460 [2024-07-12 11:44:29.716317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.716822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.716836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 
00:38:43.460 [2024-07-12 11:44:29.716997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.717011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.717146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.717162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.717231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.717245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.717320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.717333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 00:38:43.460 [2024-07-12 11:44:29.717412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.460 [2024-07-12 11:44:29.717426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.460 qpair failed and we were unable to recover it. 
00:38:43.460 [2024-07-12 11:44:29.717563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.717578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.717794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.717808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.717903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.717916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.718976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.718990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.719926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.719941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.460 [2024-07-12 11:44:29.720822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.460 qpair failed and we were unable to recover it.
00:38:43.460 [2024-07-12 11:44:29.720907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.720922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.721955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.721970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.722910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.722925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.723972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.723986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.724982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.724997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.725960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.725973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.461 [2024-07-12 11:44:29.726114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.461 [2024-07-12 11:44:29.726129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.461 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.726965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.726980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.727963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.727978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.728902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.728917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.729953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.729966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.730193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.462 [2024-07-12 11:44:29.730208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.462 qpair failed and we were unable to recover it.
00:38:43.462 [2024-07-12 11:44:29.730342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.730356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.730456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.730470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.730636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.730651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.730734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.730748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.730928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.730943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.731893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.731909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.732956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.732969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.733116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.733130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.733214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.733228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.733293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.463 [2024-07-12 11:44:29.733307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.463 qpair failed and we were unable to recover it.
00:38:43.463 [2024-07-12 11:44:29.733507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.733522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.733613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.733628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.733794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.733814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.733903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.733918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 
00:38:43.463 [2024-07-12 11:44:29.734101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 
00:38:43.463 [2024-07-12 11:44:29.734830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.734915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.734929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.735060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.735076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.463 qpair failed and we were unable to recover it. 00:38:43.463 [2024-07-12 11:44:29.735230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.463 [2024-07-12 11:44:29.735245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.735393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.735408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.735559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.735573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.735743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.735757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.735900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.735914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.735988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.736100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.736314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.736417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.736595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.736811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.736959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.736975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.737041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.737144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.737399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.737495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.737649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.737900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.737915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.737988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.738405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.738960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.738974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.739054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.739311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.739470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.739631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.739723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 
00:38:43.464 [2024-07-12 11:44:29.739807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.739974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.739988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.740065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.740078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.464 qpair failed and we were unable to recover it. 00:38:43.464 [2024-07-12 11:44:29.740289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.464 [2024-07-12 11:44:29.740304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.740437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.740452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.740598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.740612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.740763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.740778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.740929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.740943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.741305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.741892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.741968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.741980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.742462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.742914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.742928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.743008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.743196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.743409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.743496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.743590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.743742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.743757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.743997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 
00:38:43.465 [2024-07-12 11:44:29.744488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.465 [2024-07-12 11:44:29.744916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.465 qpair failed and we were unable to recover it. 00:38:43.465 [2024-07-12 11:44:29.744996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.466 [2024-07-12 11:44:29.745010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.466 qpair failed and we were unable to recover it. 
00:38:43.466 [2024-07-12 11:44:29.745151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.745922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.745937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.746891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.746905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.747827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.747987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.748004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.748174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.466 [2024-07-12 11:44:29.748188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.466 qpair failed and we were unable to recover it.
00:38:43.466 [2024-07-12 11:44:29.748290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.748509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.748599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.748708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.748821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.748971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.748986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.749891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.749906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.750798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.750812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.751013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.467 [2024-07-12 11:44:29.751028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.467 qpair failed and we were unable to recover it.
00:38:43.467 [2024-07-12 11:44:29.751106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.751932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.751945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.752913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.752927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.753904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.753918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.754920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.754934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.755029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.755043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.755174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.755189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.755326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.755341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.755411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.755425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.753 qpair failed and we were unable to recover it.
00:38:43.753 [2024-07-12 11:44:29.755593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.753 [2024-07-12 11:44:29.755609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.755693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.755707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.755859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.755874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.755949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.755964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.756944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.756958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.757934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.757949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758842] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:38:43.754 [2024-07-12 11:44:29.758867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.758926] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:43.754 [2024-07-12 11:44:29.758955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.758970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.759920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.759993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.754 [2024-07-12 11:44:29.760747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.754 qpair failed and we were unable to recover it.
00:38:43.754 [2024-07-12 11:44:29.760822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.760838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.761882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.761896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.762959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.762975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.763947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.763967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.764936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.764950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.765917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.765931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.766068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.766083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.766178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.766192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.755 qpair failed and we were unable to recover it.
00:38:43.755 [2024-07-12 11:44:29.766393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.755 [2024-07-12 11:44:29.766410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.766486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.766649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.766665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.766747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.766761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.766846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.766860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.767934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.767949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.768924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.768938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.769967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.769980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.770901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.770914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.771871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.771886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.772046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.772060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.772194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.756 [2024-07-12 11:44:29.772209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.756 qpair failed and we were unable to recover it.
00:38:43.756 [2024-07-12 11:44:29.772304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.772319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.772411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.772424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.772636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.772651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.772786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.772801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.772896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.772911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.773858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.773874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.774983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.774998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.775951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.775966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.776849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.776864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.757 [2024-07-12 11:44:29.777016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.757 [2024-07-12 11:44:29.777031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.757 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.777804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.777817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.778915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.778993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.779965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.779980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.780877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.780891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.781871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.781885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.782093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.782108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.782267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.782281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.782490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.782506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.782654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.782669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.758 [2024-07-12 11:44:29.782745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.758 [2024-07-12 11:44:29.782758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.758 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.782960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.782975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.783930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.783945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.784146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.784160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.784317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.784331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.784488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.784502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.784644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.784659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.784806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.759 [2024-07-12 11:44:29.784822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.759 qpair failed and we were unable to recover it.
00:38:43.759 [2024-07-12 11:44:29.785022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 
00:38:43.759 [2024-07-12 11:44:29.785636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.785885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.785900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 
00:38:43.759 [2024-07-12 11:44:29.786402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.786861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.786877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 
00:38:43.759 [2024-07-12 11:44:29.787102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 
00:38:43.759 [2024-07-12 11:44:29.787571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 00:38:43.759 [2024-07-12 11:44:29.787949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.759 [2024-07-12 11:44:29.787962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.759 qpair failed and we were unable to recover it. 
00:38:43.759 [2024-07-12 11:44:29.788115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.788275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.788365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.788521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.788694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.788791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.788868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.788881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.789386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.789876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.789890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.790118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.790226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.790372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.790474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.790622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.790774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.790933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.790949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.791423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.791894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.791907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.791997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.792582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.792872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.793020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.793035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 
00:38:43.760 [2024-07-12 11:44:29.793103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.760 [2024-07-12 11:44:29.793116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.760 qpair failed and we were unable to recover it. 00:38:43.760 [2024-07-12 11:44:29.793216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.793324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.793418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.793519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 
00:38:43.761 [2024-07-12 11:44:29.793678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.793837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.793941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.793956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.794100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.794340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 
00:38:43.761 [2024-07-12 11:44:29.794425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.794640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.794816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.794967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.794982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.795140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 
00:38:43.761 [2024-07-12 11:44:29.795238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.795397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.795549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.795659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 00:38:43.761 [2024-07-12 11:44:29.795808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it. 
00:38:43.761 [2024-07-12 11:44:29.795913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.761 [2024-07-12 11:44:29.795928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.761 qpair failed and we were unable to recover it.
[The same error triple — posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it." — repeats continuously from 11:44:29.795913 through 11:44:29.812995, predominantly for tqpair=0x61500033fe80 (also 0x61500032ff80, 0x615000350000, and 0x61500032d780), all with addr=10.0.0.2, port=4420.]
00:38:43.764 [2024-07-12 11:44:29.813075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 
00:38:43.764 [2024-07-12 11:44:29.813630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.813930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.813950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 
00:38:43.764 [2024-07-12 11:44:29.814297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.814885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.814899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 
00:38:43.764 [2024-07-12 11:44:29.815037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 
00:38:43.764 [2024-07-12 11:44:29.815664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.764 [2024-07-12 11:44:29.815862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.764 [2024-07-12 11:44:29.815878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.764 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.815944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.815958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.816091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.816249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.816418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.816578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.816740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.816820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.816969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.816983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.817086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.817311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.817408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.817582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.817769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.817923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.817937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.818090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.818256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.818421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.818539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.818701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.818853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.818868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.819026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.819208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.819309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.819504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.819780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.819890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.819906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.820163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.820817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.820972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.820987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.821079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.821179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.821298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 
00:38:43.765 [2024-07-12 11:44:29.821404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.821518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.765 [2024-07-12 11:44:29.821607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.765 [2024-07-12 11:44:29.821622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.765 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.821722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.821736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.821897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.821911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 
00:38:43.766 [2024-07-12 11:44:29.822039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 
00:38:43.766 [2024-07-12 11:44:29.822740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.822920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.822990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.823005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.823172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.823186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 00:38:43.766 [2024-07-12 11:44:29.823258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.766 [2024-07-12 11:44:29.823271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.766 qpair failed and we were unable to recover it. 
00:38:43.766 [2024-07-12 11:44:29.823341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.823354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.823519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.823535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.823762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.823777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.823863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.823878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.824829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.824852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.825977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.825990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.826074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.826090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 EAL: No free 2048 kB hugepages reported on node 1
00:38:43.766 [2024-07-12 11:44:29.826270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.766 [2024-07-12 11:44:29.826285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.766 qpair failed and we were unable to recover it.
00:38:43.766 [2024-07-12 11:44:29.826367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.826388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.826544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.826561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.826649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.826663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.826749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.826764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.826997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.827113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.827272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.827449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.827710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.827822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.828876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.828891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.829931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.829946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.830925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.830940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.831921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.831935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.832032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.832047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.832253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.832268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.832503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.767 [2024-07-12 11:44:29.832519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.767 qpair failed and we were unable to recover it.
00:38:43.767 [2024-07-12 11:44:29.832595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.832610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.832826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.832842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.832969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.832983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.833943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.833956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.834907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.834924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.835972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.835986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.836977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.836991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.837983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.837999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.838083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.838098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.838176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.838190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.768 [2024-07-12 11:44:29.838325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.768 [2024-07-12 11:44:29.838340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.768 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.838420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.838438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.838572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.838587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.838722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.838736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.838957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.838971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.839868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.839883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.840087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.769 [2024-07-12 11:44:29.840102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.769 qpair failed and we were unable to recover it.
00:38:43.769 [2024-07-12 11:44:29.840192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.840275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.840441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.840582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.840743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 
00:38:43.769 [2024-07-12 11:44:29.840847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.840928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.840942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.841026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.841204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.841285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 
00:38:43.769 [2024-07-12 11:44:29.841441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.841684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.841839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.841854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.842009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.842173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 
00:38:43.769 [2024-07-12 11:44:29.842356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.842478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.842682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.842953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.842969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.843191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.843205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 
00:38:43.769 [2024-07-12 11:44:29.843338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.843353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.843581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.843595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.843765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.843779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.843985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.844130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 
00:38:43.769 [2024-07-12 11:44:29.844292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.844467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.844626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.769 qpair failed and we were unable to recover it. 00:38:43.769 [2024-07-12 11:44:29.844864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.769 [2024-07-12 11:44:29.844881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.845092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.845107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.845322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.845337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.845490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.845506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.845576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.845589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.845818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.845833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.846030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.846120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.846220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.846445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.846634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.846871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.846886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.847027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.847042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.847185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.847200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.847353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.847367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.847598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.847613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.847820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.847836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.847990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.848581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.848960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.848975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.849058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.849267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.849446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.849586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.849713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.849889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.849905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.850103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.850118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 
00:38:43.770 [2024-07-12 11:44:29.850354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.850368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.850549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.770 [2024-07-12 11:44:29.850564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.770 qpair failed and we were unable to recover it. 00:38:43.770 [2024-07-12 11:44:29.850791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.850805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.850975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.850990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.851138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.851153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 
00:38:43.771 [2024-07-12 11:44:29.851414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.851429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.851635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.851649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.851873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.851888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.852037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.852054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.852216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.852230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 
00:38:43.771 [2024-07-12 11:44:29.852445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.852460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.852564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.852578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.852783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.852797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.853040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.853054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.853223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.853238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 
00:38:43.771 [2024-07-12 11:44:29.853411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.853427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.853573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.853588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.853816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.853831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.853997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.854091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 
00:38:43.771 [2024-07-12 11:44:29.854241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.854396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.854568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.854804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.854818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 00:38:43.771 [2024-07-12 11:44:29.855025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.771 [2024-07-12 11:44:29.855039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.771 qpair failed and we were unable to recover it. 
00:38:43.771 [2024-07-12 11:44:29.855172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.855187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.855322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.855336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.855475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.855491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.855633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.855648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.855819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.855833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.856933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.856948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.857155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.857170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.857385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.857402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.857555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.857570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.857720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.857735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.857939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.857954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.858163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.858178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.771 [2024-07-12 11:44:29.858402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.771 [2024-07-12 11:44:29.858417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.771 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.858616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.858631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.858832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.858847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.859918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.859934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.860088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.860257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.860425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.860646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.860758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.860998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.861013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.861238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.861254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.861451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.861466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.861603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.861619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.861788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.861803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.862937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.862952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.863125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.863140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.863364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.863385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.863488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.863503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.863671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.863686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.863876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.863891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.864093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.864108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.864266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.864281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.864502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.864518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.864678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.864919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.864934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.865200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.865214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.865364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.865381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.865530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.865546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.865697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.865711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.865938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.772 [2024-07-12 11:44:29.865953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.772 qpair failed and we were unable to recover it.
00:38:43.772 [2024-07-12 11:44:29.866099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.866113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.866322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.866338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.866563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.866578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.866807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.866822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.866968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.866983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.867217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.867232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.867334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.867351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.867572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.867588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.867725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.867740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.867834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.867848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.868071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.868086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.868248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.868262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.868438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.868453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.868674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.868689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.868916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.868931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.869096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.869111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.869341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.869356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.869596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.869612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.869812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.869827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.870922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.870936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.871934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.871950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.872100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.872115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.872310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.872341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.872576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.872597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.872817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.872837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.873098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.873118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.873272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.773 [2024-07-12 11:44:29.873292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.773 qpair failed and we were unable to recover it.
00:38:43.773 [2024-07-12 11:44:29.873527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.773 [2024-07-12 11:44:29.873547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.773 qpair failed and we were unable to recover it. 00:38:43.773 [2024-07-12 11:44:29.873722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.773 [2024-07-12 11:44:29.873738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.773 qpair failed and we were unable to recover it. 00:38:43.773 [2024-07-12 11:44:29.873967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.873982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.874082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.874182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.874353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.874574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.874732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.874883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.874902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.875124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.875140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.875289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.875304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.875549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.875564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.875745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.875760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.876010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.876025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.876255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.876269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.876358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.876371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.876602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.876617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.876790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.876805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.877053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.877201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.877416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.877576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.877761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.877976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.877991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.878143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.878158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.878395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.878411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.878578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.878593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.878799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.878814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.878906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.878920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.879169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.879184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.879331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.879347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.879414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.879428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.879627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.879642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.879859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.879874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.880052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.880067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 
00:38:43.774 [2024-07-12 11:44:29.880294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.880309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.774 [2024-07-12 11:44:29.880509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.774 [2024-07-12 11:44:29.880525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.774 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.880680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.880695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.880894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.880909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.880999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.881175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.881414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.881575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.881803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.881970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.881985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.882221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.882237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.882426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.882441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.882582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.882597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.882748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.882763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.883017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.883164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.883407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.883514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.883735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.883892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.883906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.884066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.884290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.884389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.884555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.884718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.884879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.884893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.885091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.885341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.885459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.885661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.885762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.885926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.885941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.886116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.886131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.886304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.886328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.886532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.886549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.886702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.886717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.886932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.886947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.887013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.887193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.887293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 
00:38:43.775 [2024-07-12 11:44:29.887459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.887703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.775 [2024-07-12 11:44:29.887913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.775 [2024-07-12 11:44:29.887928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.775 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.888154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.888169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.888369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.888389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 
00:38:43.776 [2024-07-12 11:44:29.888485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.888498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.888685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.888700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.888840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.888854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.889112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.889339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 
00:38:43.776 [2024-07-12 11:44:29.889527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.889694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.889843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.889949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.889964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 00:38:43.776 [2024-07-12 11:44:29.890057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.776 [2024-07-12 11:44:29.890073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.776 qpair failed and we were unable to recover it. 
00:38:43.777 [2024-07-12 11:44:29.896359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.777 [2024-07-12 11:44:29.896374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.777 qpair failed and we were unable to recover it.
00:38:43.777 [2024-07-12 11:44:29.896557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.777 [2024-07-12 11:44:29.896572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.777 qpair failed and we were unable to recover it.
00:38:43.777 [2024-07-12 11:44:29.896725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.777 [2024-07-12 11:44:29.896740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.777 qpair failed and we were unable to recover it.
00:38:43.777 [2024-07-12 11:44:29.896889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.777 [2024-07-12 11:44:29.896904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.777 qpair failed and we were unable to recover it.
00:38:43.777 [2024-07-12 11:44:29.896914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:43.777 [2024-07-12 11:44:29.897048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.777 [2024-07-12 11:44:29.897062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.777 qpair failed and we were unable to recover it.
00:38:43.779 [2024-07-12 11:44:29.911219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.911234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.911369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.911388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.911562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.911577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.911777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.911791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.911967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.911982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.912154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.912170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.912400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.912416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.912624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.912640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.912868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.912885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.913159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.913174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.913262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.913276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.913501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.913517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.913680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.913695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.913869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.913884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.914019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.914034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.914167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.914182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.914394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.914410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.914569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.914583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.914804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.914819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.915061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.915076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.915320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.915334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.915501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.915516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.915742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.915757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.915939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.915954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.916091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.916106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.916317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.916331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.916475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.916490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.916736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.916752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.916916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.916931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.917159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.917174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 
00:38:43.779 [2024-07-12 11:44:29.917318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.917333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.917428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.917443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.917526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.917540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.917691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.779 [2024-07-12 11:44:29.917706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.779 qpair failed and we were unable to recover it. 00:38:43.779 [2024-07-12 11:44:29.917853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.917867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.917948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.917962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.918053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.918217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.918368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.918555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.918780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.918943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.918958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.919178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.919193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.919326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.919341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.919588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.919603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.919755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.919771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.919920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.919935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.920180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.920195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.920346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.920364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.920620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.920636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.920833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.920848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.920983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.920998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.921148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.921163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.921376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.921400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.921610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.921625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.921774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.921789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.922006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.922020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.922207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.922222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.922441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.922456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.922691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.922706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.922960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.922988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.923204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.923219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.923366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.923387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.923593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.923607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.923830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.923844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.923998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.924013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.924150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.924165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.924336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.924351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.924553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.924567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.924776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.924791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.924991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.925006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.925142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.925157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.925315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.925330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.925428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.925442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 00:38:43.780 [2024-07-12 11:44:29.925608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.780 [2024-07-12 11:44:29.925622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.780 qpair failed and we were unable to recover it. 
00:38:43.780 [2024-07-12 11:44:29.925813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.925846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.925962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.925993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.926211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.926236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.926384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.926401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.926504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.926519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 
00:38:43.781 [2024-07-12 11:44:29.926734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.926748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.926899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.926914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.927047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.927061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.927261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.927276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 00:38:43.781 [2024-07-12 11:44:29.927517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.781 [2024-07-12 11:44:29.927533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.781 qpair failed and we were unable to recover it. 
00:38:43.781 [2024-07-12 11:44:29.927696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.927710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.927908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.927922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.928874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.928887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.929806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.929819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.930042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.930056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.930287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.930302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.930454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.930470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.930649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.930664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.930811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.930825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.931080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.931095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.931348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.931363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.931519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.931534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.931754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.931768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.931999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.932084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.932294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.932458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.932630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.932845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.932861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.933011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.933027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.781 qpair failed and we were unable to recover it.
00:38:43.781 [2024-07-12 11:44:29.933277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.781 [2024-07-12 11:44:29.933300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.933475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.933501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.933682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.933704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.933938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.933954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.934099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.934114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.934340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.934355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.934601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.934616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.934846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.934861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.935086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.935101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.935251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.935266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.935486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.935501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.935711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.935726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.935878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.935893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.936109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.936126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.936309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.936326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.936553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.936574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.936734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.936749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.936932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.936947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.937177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.937192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.937357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.937372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.937521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.937537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.937749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.937765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.937957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.937973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.938147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.938162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.938362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.938384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.938588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.938603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.938824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.938839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.939888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.939902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.940076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.940091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.940304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.940319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.940512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.940528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.940733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.940748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.940946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.940961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.941184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.941199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.941297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.941320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.782 qpair failed and we were unable to recover it.
00:38:43.782 [2024-07-12 11:44:29.941487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.782 [2024-07-12 11:44:29.941513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.941697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.941720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.941931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.941948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.942190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.942205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.942466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.942481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.942705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.942720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.942870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.942885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.943054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.943292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.943448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.943624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.943786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.943988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.944138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.944307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.944514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.944678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.944919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.944934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.945162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.945325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.945490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.945586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.945766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.945990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.946005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.946212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.946228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.946435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.946451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.946598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.946613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.946773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.946788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.946988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.947151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.947311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.947486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.947732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.947884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.947899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.948050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.783 [2024-07-12 11:44:29.948065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.783 qpair failed and we were unable to recover it.
00:38:43.783 [2024-07-12 11:44:29.948264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.948279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.948422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.948437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.948523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.948536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.948784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.948799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.948967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.948988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.949085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.949109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.949352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.949373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.949482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.949499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.949737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.949752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.949837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.949855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.950936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.950950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.951041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.951054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.951303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.951319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.951497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.951513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.951652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.951667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.951897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.951913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.952147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.952162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.952386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.952402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.952491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.952505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.952660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.952675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.952812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.952827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.953000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.953014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.953274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.953289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.953516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.953532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.953694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.953710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.953934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.953950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.954087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.954102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.954358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.954373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.954614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.954630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.954876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.954891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.955117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.955132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.955279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.955294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.955503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.955518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.955674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.955689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.955865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.955880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.956103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.784 [2024-07-12 11:44:29.956119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.784 qpair failed and we were unable to recover it.
00:38:43.784 [2024-07-12 11:44:29.956200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.956214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.956298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.956312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.956416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.956430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.956605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.956628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.956897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.956922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.957231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.957253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.957526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.957542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.957745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.957760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.957957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.957972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.958197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.958212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.958438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.958452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.958623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.958638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.958712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.958726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.958871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.958884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.959051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.959066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.959322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.959337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.959537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.959555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.959810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.959825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.959963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.959981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.960207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.960222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.960450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.960465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.960639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.960653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.960805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.960820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.961018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.961033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.961181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.961196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.961397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.961413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.961691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.961706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.961879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.961894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.962058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.962074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.962265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.962280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.962419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.962434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.962682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.962697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.962870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.962885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.963082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.963098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.963339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.963356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.963582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.963603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.963752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.963766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.963918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.963932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.964112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.964127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.964276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.964291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.785 [2024-07-12 11:44:29.964358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.785 [2024-07-12 11:44:29.964371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.785 qpair failed and we were unable to recover it.
00:38:43.786 [2024-07-12 11:44:29.964525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.786 [2024-07-12 11:44:29.964540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.786 qpair failed and we were unable to recover it.
00:38:43.786 [2024-07-12 11:44:29.964714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.786 [2024-07-12 11:44:29.964729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.786 qpair failed and we were unable to recover it.
00:38:43.786 [2024-07-12 11:44:29.964910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.964933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.965110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.965135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.965406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.965428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.965708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.965725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.965882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.965897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.966135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.966371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.966547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.966713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.966820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.966969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.966984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.967216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.967231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.967387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.967402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.967538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.967555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.967720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.967735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.967957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.967972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.968144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.968386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.968557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.968662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.968841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.968948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.968961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.969107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.969274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.969376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.969597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.969705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.969927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.969941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.970146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.970161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.970396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.970411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.970681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.970696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.970861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.970876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.971072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.971088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.971370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.971388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.971480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.971494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 
00:38:43.786 [2024-07-12 11:44:29.971640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.971655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.971807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.971821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.972045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.786 [2024-07-12 11:44:29.972060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.786 qpair failed and we were unable to recover it. 00:38:43.786 [2024-07-12 11:44:29.972287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.972302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.972391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.972404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.972563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.972580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.972749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.972763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.972915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.972930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.973022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.973035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.973132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.973146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.973348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.973364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.973613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.973629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.973800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.973815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.974037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.974052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.974288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.974303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.974549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.974565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.974657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.974671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.974913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.974928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.975155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.975170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.975425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.975441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.975639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.975654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.975826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.975841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.975993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.976008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.976232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.976247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.976397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.976416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.976651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.976666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.976761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.976775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.977426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.977966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.977980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.978187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.978202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 
00:38:43.787 [2024-07-12 11:44:29.978283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.978296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.978472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.978487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.787 qpair failed and we were unable to recover it. 00:38:43.787 [2024-07-12 11:44:29.978687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.787 [2024-07-12 11:44:29.978701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.978788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.978801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.978881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.978895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 
00:38:43.788 [2024-07-12 11:44:29.979044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.979059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.979307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.979322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.979613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.979628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.979697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.979710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.979936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.979951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 
00:38:43.788 [2024-07-12 11:44:29.980151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.980169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.980302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.980317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.980524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.980539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.980741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.980756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 00:38:43.788 [2024-07-12 11:44:29.980969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.788 [2024-07-12 11:44:29.980984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.788 qpair failed and we were unable to recover it. 
00:38:43.788 [2024-07-12 11:44:29.981284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.788 [2024-07-12 11:44:29.981298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.788 qpair failed and we were unable to recover it.
[... the three-line record above repeats roughly 115 more times between 11:44:29.981 and 11:44:30.003: every retried connect() to 10.0.0.2 port 4420 fails with errno = 111 for tqpair=0x61500033fe80, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:43.791 [2024-07-12 11:44:30.003514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.003533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.003700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.003716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.003826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.003841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.003934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.003948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.004211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.004896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.004910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.004991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.005201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.005420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.005611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.005707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.005870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.005976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.005989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.006147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.006162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.006315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.006330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.006412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.006426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.006661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.006676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.006894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.006909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.007077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.007092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.007311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.007326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.007477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.007492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.007639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.007654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.007802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.007818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.008087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.008101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.008315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.008330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 
00:38:43.791 [2024-07-12 11:44:30.008517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.008532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.008618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.008631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.008834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.008849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.791 qpair failed and we were unable to recover it. 00:38:43.791 [2024-07-12 11:44:30.009000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.791 [2024-07-12 11:44:30.009015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.009248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.009262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.009342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.009354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.009509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.009521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.009667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.009679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.009900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.009913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.010010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.010228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.010462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.010577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.010674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.010878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.010890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.011048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.011232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.011396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.011555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.011704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.011822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.011834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.012622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.012950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.012965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.013115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.013128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.013280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.013294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.013437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.013451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.013612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.013625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.013835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.013851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.014060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.014278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.014383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.014617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.014792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.014899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.014913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.015143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.015159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 
00:38:43.792 [2024-07-12 11:44:30.015319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.015335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.015475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.015491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.015642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.015657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.792 [2024-07-12 11:44:30.015885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.792 [2024-07-12 11:44:30.015902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.792 qpair failed and we were unable to recover it. 00:38:43.793 [2024-07-12 11:44:30.016049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 
00:38:43.793 [2024-07-12 11:44:30.016135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 00:38:43.793 [2024-07-12 11:44:30.016272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 00:38:43.793 [2024-07-12 11:44:30.016400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 00:38:43.793 [2024-07-12 11:44:30.016574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 00:38:43.793 [2024-07-12 11:44:30.016821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.793 [2024-07-12 11:44:30.016844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.793 qpair failed and we were unable to recover it. 
00:38:43.795 [2024-07-12 11:44:30.033862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.033877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.033948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.033962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.034123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.034139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.034284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.034299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.034467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.034482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 
00:38:43.795 [2024-07-12 11:44:30.034698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.034714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.034890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.034905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.034998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.035164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.035421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 
00:38:43.795 [2024-07-12 11:44:30.035587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.035727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.035830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.035926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.035941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.795 qpair failed and we were unable to recover it. 00:38:43.795 [2024-07-12 11:44:30.036078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.795 [2024-07-12 11:44:30.036093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.036178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.036193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.036393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.036409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.036630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.036645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.036862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.036877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.037090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.037178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.037388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.037594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.037789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.037951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.037966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.038048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.038207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.038408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.038605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.038776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.038981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.038995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.039220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.039234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.039422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.039438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.039608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.039623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.039799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.039814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.040068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.040083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.040254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.040268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.040449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.040487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.040772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.040814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.040983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.041121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.041235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.041467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.041597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.041761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 
00:38:43.796 [2024-07-12 11:44:30.041935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.041956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.042109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.796 [2024-07-12 11:44:30.042129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.796 qpair failed and we were unable to recover it. 00:38:43.796 [2024-07-12 11:44:30.042373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.042398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.042641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.042675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.042826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.042843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.043044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.043663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.043959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.043972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.044069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.044084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.044169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.044183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.044387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.044402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.044549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.044564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.044767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.044781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.045104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.045841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.045954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.045969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.046063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.046078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.046210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.046225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 00:38:43.797 [2024-07-12 11:44:30.046450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.797 [2024-07-12 11:44:30.046466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.797 qpair failed and we were unable to recover it. 
00:38:43.797 [2024-07-12 11:44:30.046618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.797 [2024-07-12 11:44:30.046632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.797 qpair failed and we were unable to recover it.
[... same three-record sequence — connect() failed, errno = 111 (ECONNREFUSED) -> nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it." — repeats ~115 more times between [2024-07-12 11:44:30.046723] and [2024-07-12 11:44:30.062272] (log clock 00:38:43.797-00:38:43.800), with tqpair handles 0x61500033fe80, 0x615000350000, and 0x61500032d780 ...]
00:38:43.800 [2024-07-12 11:44:30.062339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.062497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.062612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.062707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.062805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 
00:38:43.800 [2024-07-12 11:44:30.062968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.062983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.063054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.063069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.063151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.063168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.063241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.063256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.063461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.063476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 
00:38:43.800 [2024-07-12 11:44:30.063560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.800 [2024-07-12 11:44:30.063575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.800 qpair failed and we were unable to recover it. 00:38:43.800 [2024-07-12 11:44:30.063654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.063669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.063867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.063882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.063959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.063974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.064129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.064320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.064406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.064511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.064610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.064698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.064825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.064839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.065435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.065863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.065958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.065973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.066529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.066963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.066978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.067136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.067311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.067395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.067503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.067770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.067879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.067893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.068122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 
00:38:43.801 [2024-07-12 11:44:30.068790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.801 qpair failed and we were unable to recover it. 00:38:43.801 [2024-07-12 11:44:30.068890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.801 [2024-07-12 11:44:30.068903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 
00:38:43.802 [2024-07-12 11:44:30.069314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.069810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 
00:38:43.802 [2024-07-12 11:44:30.069911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.069926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 
00:38:43.802 [2024-07-12 11:44:30.070644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.070902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.070988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 
00:38:43.802 [2024-07-12 11:44:30.071410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 00:38:43.802 [2024-07-12 11:44:30.071937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:43.802 [2024-07-12 11:44:30.071951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:43.802 qpair failed and we were unable to recover it. 
00:38:43.802 [2024-07-12 11:44:30.072034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:43.802 [2024-07-12 11:44:30.072050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:43.802 qpair failed and we were unable to recover it.
[... same error triplet repeated: connect() to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED) for tqpair=0x61500033fe80, each attempt ending "qpair failed and we were unable to recover it.", from 2024-07-12 11:44:30.072 through 11:44:30.086 ...]
00:38:44.100 [2024-07-12 11:44:30.086251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.086267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.086353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.086368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.086569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.086588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.086660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.086674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.086829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.086845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 
00:38:44.100 [2024-07-12 11:44:30.087003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.087020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.087163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.087179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.087324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.087348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.087548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.087599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.087772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.087798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 
00:38:44.100 [2024-07-12 11:44:30.088007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.088056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.088269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.088349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.088495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.100 [2024-07-12 11:44:30.088513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.100 qpair failed and we were unable to recover it. 00:38:44.100 [2024-07-12 11:44:30.088591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.088686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.088702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.088783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.088798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.088901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.088916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.088994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.089276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.089817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.089917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.089932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.090478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.090900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.090915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.090996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.091586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 00:38:44.101 [2024-07-12 11:44:30.091976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.091990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.101 qpair failed and we were unable to recover it. 
00:38:44.101 [2024-07-12 11:44:30.092088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.101 [2024-07-12 11:44:30.092103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.092244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.092345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.092442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.092561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.092669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.092803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.092817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.093492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.093982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.093997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.094083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.094212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.094375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.094559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.094657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.094811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.094981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.094997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.095199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.095214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.095302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.095317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.095494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.095625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.095640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.095785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.095799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.096003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.096101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.096261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 
00:38:44.102 [2024-07-12 11:44:30.096429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.096672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.096886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.102 [2024-07-12 11:44:30.096902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.102 qpair failed and we were unable to recover it. 00:38:44.102 [2024-07-12 11:44:30.097103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 00:38:44.103 [2024-07-12 11:44:30.097327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 
00:38:44.103 [2024-07-12 11:44:30.097497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 00:38:44.103 [2024-07-12 11:44:30.097595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 00:38:44.103 [2024-07-12 11:44:30.097745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 00:38:44.103 [2024-07-12 11:44:30.097898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.097912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 00:38:44.103 [2024-07-12 11:44:30.098103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.103 [2024-07-12 11:44:30.098119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.103 qpair failed and we were unable to recover it. 
00:38:44.103 [2024-07-12 11:44:30.098279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.098299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.098434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.098449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.098594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.098610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.098741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.098756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.098962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.098976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.099216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.099230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.099312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.099326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.099576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.099591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.099755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.099769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.100917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.100932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.101123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.101138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.101351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.101365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.101665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.101703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.101947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.101985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.102129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.102164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.103 [2024-07-12 11:44:30.102336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.103 [2024-07-12 11:44:30.102358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.103 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.102543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.102560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.102703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.102718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.102806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.102820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.102985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.102999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.103145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.103160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.103258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.103273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.103518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.103534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.103750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.103766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.103967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.103982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.104190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.104205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.104385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.104400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.104550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.104565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.104701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.104716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.104941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.104956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.105199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.105214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.105387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.105402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.105630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.105645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.105880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.105904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.106167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.106188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.106336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.106362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.106485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.106506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.106742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.106762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.106970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.106990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.107081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.107101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.107366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.107392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.107646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.107667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.107779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.107799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.108038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.108059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.108214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.108234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.108496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.108518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.104 [2024-07-12 11:44:30.108612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.104 [2024-07-12 11:44:30.108637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.104 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.108732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.108752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.108931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.108953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.109169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.109196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.109435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.109455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.109613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.109633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.109823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.109842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.110086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.110107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.110334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.110355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.110594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.110615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.110771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.110791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.110886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.110905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.111980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.111999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.112212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.112233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.112395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.112416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.112565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.112585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.112790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.112809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.113050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.113070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.105 qpair failed and we were unable to recover it.
00:38:44.105 [2024-07-12 11:44:30.113322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.105 [2024-07-12 11:44:30.113342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.113564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.113585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.113748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.113769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.114023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.114043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.114313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.114331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.114540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.114555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.114787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.114802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.114903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.114918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.115962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.115977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.116179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.116198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.116352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.116368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.116537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.116551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.116693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.116710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.116942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.116957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.117049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.106 [2024-07-12 11:44:30.117064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.106 qpair failed and we were unable to recover it.
00:38:44.106 [2024-07-12 11:44:30.117208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.117223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.117307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.117322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.117522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.117538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.117756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.117771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.118043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.118058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 
00:38:44.106 [2024-07-12 11:44:30.118229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.118243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.118419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.118434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.118578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.118593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.118814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.118829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.119070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.119085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 
00:38:44.106 [2024-07-12 11:44:30.119289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.119304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.119522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.106 [2024-07-12 11:44:30.119538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.106 qpair failed and we were unable to recover it. 00:38:44.106 [2024-07-12 11:44:30.119688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.119704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.119915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.119935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.120094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.120109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.120208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.120224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.120425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.120441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.120579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.120594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.120847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.120863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.121009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.121024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.121244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.121259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.121476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.121492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.121750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.121765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.121923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.121938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.122052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.122076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.122252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.122273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.122441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.122462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.122706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.122726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.122963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.122983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.123246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.123266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.123427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.123448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.123695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.123714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.123935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.123955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.124138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.124158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.124400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.124420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.124650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.124670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.124912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.124932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.125029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.125052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.125146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.125166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.125385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.125406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 
00:38:44.107 [2024-07-12 11:44:30.125510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.125530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.125768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.125788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.125994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.126014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.126204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.126223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.107 qpair failed and we were unable to recover it. 00:38:44.107 [2024-07-12 11:44:30.126387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.107 [2024-07-12 11:44:30.126407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.126558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.126577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.126758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.126778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.126939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.126959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.127184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.127203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.127438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.127459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.127559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.127579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.127799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.127819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.128032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.128052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.128292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.128312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.128481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.128502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.128691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.128711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.128949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.128969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.129134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.129154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.129341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.129360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.129550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.129574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.129728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.129744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.129924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.129939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.130148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.130163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.130319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.130334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.130560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.130583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.130826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.130847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.131108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.131128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.131285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.131304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.131512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.131533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.131752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.131773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.108 [2024-07-12 11:44:30.131956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.131976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.132088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.132108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.132220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.132240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.132418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.132439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 00:38:44.108 [2024-07-12 11:44:30.132540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.108 [2024-07-12 11:44:30.132560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.108 qpair failed and we were unable to recover it. 
00:38:44.109 [2024-07-12 11:44:30.132715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.132735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.132972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.132992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.133085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.133109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.133370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.133396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.133646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.133666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 
00:38:44.109 [2024-07-12 11:44:30.133891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.133911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:44.109 [2024-07-12 11:44:30.134459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:44.109 [2024-07-12 11:44:30.134472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:44.109 [2024-07-12 11:44:30.134472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:44.109 [2024-07-12 11:44:30.134492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 [2024-07-12 11:44:30.134496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:38:44.109 [2024-07-12 11:44:30.134720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.134745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:38:44.109 [2024-07-12 11:44:30.134811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:38:44.109 [2024-07-12 11:44:30.134836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:38:44.109 [2024-07-12 11:44:30.134954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.134974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.135145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.135165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 
00:38:44.109 [2024-07-12 11:44:30.135316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.135335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.135541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.135562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.135668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.135687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.135927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.135948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 00:38:44.109 [2024-07-12 11:44:30.136151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.109 [2024-07-12 11:44:30.136170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.109 qpair failed and we were unable to recover it. 
00:38:44.109 [2024-07-12 11:44:30.136388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.136409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.136582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.136602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.136801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.136821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.137039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.137060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.137299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.137319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.137495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.137516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.137682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.137704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.137872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.137892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.138179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.138201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.109 qpair failed and we were unable to recover it.
00:38:44.109 [2024-07-12 11:44:30.138460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.109 [2024-07-12 11:44:30.138481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.138653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.138673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.138824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.138843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.139969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.139984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.140127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.140142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.140343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.140357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.140566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.140581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.140665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.140682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.140780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.140796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.141845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.141999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.142202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.142298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.142479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.142674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.142771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.142788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.143010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.143028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.143249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.143267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.143414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.143432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.143661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.143678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.110 [2024-07-12 11:44:30.143759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.110 [2024-07-12 11:44:30.143774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.110 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.143931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.143948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.144158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.144175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.144338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.144355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.144532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.144549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.144740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.144756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.144846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.144861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.145095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.145111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.145215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.145230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.145427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.145444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.145650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.145668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.145823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.145841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.146813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.146828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.147098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.147115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.147363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.147386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.147589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.147705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.147724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.147928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.111 [2024-07-12 11:44:30.147944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.111 qpair failed and we were unable to recover it.
00:38:44.111 [2024-07-12 11:44:30.148101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.148116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.148296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.148311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.148462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.148478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.148648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.148663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.148869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.148884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.149151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.149167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.149318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.149334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.149482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.149499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.149667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.149682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.149795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.149811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.150974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.150989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.151903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.151918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.152151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.152166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.152396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.152412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.152559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.152574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.152731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.152747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.152923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.152938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.153094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.153110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.153315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.153330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.153514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.112 [2024-07-12 11:44:30.153530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.112 qpair failed and we were unable to recover it.
00:38:44.112 [2024-07-12 11:44:30.153697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.113 [2024-07-12 11:44:30.153713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.113 qpair failed and we were unable to recover it.
00:38:44.113 [2024-07-12 11:44:30.153876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.153892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 
00:38:44.113 [2024-07-12 11:44:30.154639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.154913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.154928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.155031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.155150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 
00:38:44.113 [2024-07-12 11:44:30.155306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.155505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.155673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.155927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.155943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.156186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.156202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 
00:38:44.113 [2024-07-12 11:44:30.156372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.156399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.156570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.156585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.156765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.156781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.156899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.156915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.157010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 
00:38:44.113 [2024-07-12 11:44:30.157167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.157327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.157482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.157719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.157818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.157834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 
00:38:44.113 [2024-07-12 11:44:30.158070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.158085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.158336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.158352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.158504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.158521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.158696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.113 [2024-07-12 11:44:30.158712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.113 qpair failed and we were unable to recover it. 00:38:44.113 [2024-07-12 11:44:30.158874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.158894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.159094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.159109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.159261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.159275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.159502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.159518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.159642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.159674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.159976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.160009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.160170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.160192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.160372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.160400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.160565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.160587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.160803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.160824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.161016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.161279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.161468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.161581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.161816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.161981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.161996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.162206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.162388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.162520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.162623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.162733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.162899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.162915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.163119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.163225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.163411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.163628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.163867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.163964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.163980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.164133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.164148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.164238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.164254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 00:38:44.114 [2024-07-12 11:44:30.164333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.114 [2024-07-12 11:44:30.164348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.114 qpair failed and we were unable to recover it. 
00:38:44.114 [2024-07-12 11:44:30.164556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.164572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.164712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.164728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.164879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.164894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.165070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.165322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.165493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.165613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.165844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.165966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.165982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.166176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.166191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.166394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.166410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.166610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.166626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.166797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.166812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.167045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.167070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.167324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.167345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.167526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.167547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.167674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.167695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.167908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.167930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.168029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.168050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.168290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.168310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.168536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.168557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.168803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.168820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.169025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.169266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.169492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.169688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.169798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.169968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.169983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.170075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.170090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 00:38:44.115 [2024-07-12 11:44:30.170171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.170187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.115 qpair failed and we were unable to recover it. 
00:38:44.115 [2024-07-12 11:44:30.170357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.115 [2024-07-12 11:44:30.170372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.170466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.170482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.170631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.170646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.170742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.170758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.170895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.170910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.171060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.171234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.171344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.171534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.171708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.171953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.171969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.172193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.172208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.172281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.172296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.172454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.172470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.172697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.172713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.172891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.172907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.173071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.173087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.173294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.173309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.173498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.173515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.173652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.173673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.173828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.173844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.173995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.174011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.174263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.174278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.174490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.174515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.174754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.174780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.174956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.174983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.175213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.175234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.175345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.175365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.175568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.175589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 00:38:44.116 [2024-07-12 11:44:30.175682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.116 [2024-07-12 11:44:30.175701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.116 qpair failed and we were unable to recover it. 
00:38:44.116 [2024-07-12 11:44:30.175890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.175910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.176149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.176165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.176369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.176394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.176621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.176637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.176745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.176760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 
00:38:44.117 [2024-07-12 11:44:30.176833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.176849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 
00:38:44.117 [2024-07-12 11:44:30.177544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.177921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.177937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.178014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 
00:38:44.117 [2024-07-12 11:44:30.178119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.178234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.178427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.178581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.178690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 
00:38:44.117 [2024-07-12 11:44:30.178911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.178926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.179151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.179166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.179253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.179268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.179405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.179421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 00:38:44.117 [2024-07-12 11:44:30.179576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.117 [2024-07-12 11:44:30.179592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.117 qpair failed and we were unable to recover it. 
00:38:44.117 [2024-07-12 11:44:30.179728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.179742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.179826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.179841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.179956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.179971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.180117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.180278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.180434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.180700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.180853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.180960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.180982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.181155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.181292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.181545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.181663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.181837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.181948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.181967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.182118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.182293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.182451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.182623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.182791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.182876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.182892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.183596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.183967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.183982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.184128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.184143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.184214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.184229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 
00:38:44.118 [2024-07-12 11:44:30.184395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.184412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.184485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.184515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.118 qpair failed and we were unable to recover it. 00:38:44.118 [2024-07-12 11:44:30.184587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.118 [2024-07-12 11:44:30.184601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.119 qpair failed and we were unable to recover it. 00:38:44.119 [2024-07-12 11:44:30.184678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.119 [2024-07-12 11:44:30.184693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.119 qpair failed and we were unable to recover it. 00:38:44.119 [2024-07-12 11:44:30.184763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.119 [2024-07-12 11:44:30.184778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.119 qpair failed and we were unable to recover it. 
00:38:44.119 [2024-07-12 11:44:30.184858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.184873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.184973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.184989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.185910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.185992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.186841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.186861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.187879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.187894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.188063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.188223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.188332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.188500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.119 [2024-07-12 11:44:30.188711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.119 qpair failed and we were unable to recover it.
00:38:44.119 [2024-07-12 11:44:30.188859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.188874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.189123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.189138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.189240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.189254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.189484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.189501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.189646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.189674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.189890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.189905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.190932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.190948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.191870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.191886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.192887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.192903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.120 [2024-07-12 11:44:30.193867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.120 [2024-07-12 11:44:30.193881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.120 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.193986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.194908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.194923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.195973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.195987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.196971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.196986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.197083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.197098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.197167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.197183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.197279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.197294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.197428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.197443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.121 [2024-07-12 11:44:30.197581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.121 [2024-07-12 11:44:30.197595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.121 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.197679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.197694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.197765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.197782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.197874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.197888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.198820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.198986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.199946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.199961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.200111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.200127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.200202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.200216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.200396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.200411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.200587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.122 [2024-07-12 11:44:30.200602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.122 qpair failed and we were unable to recover it.
00:38:44.122 [2024-07-12 11:44:30.200686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.123 [2024-07-12 11:44:30.200701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.123 qpair failed and we were unable to recover it.
00:38:44.123 [2024-07-12 11:44:30.200853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.200867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.201551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.201850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.201865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.202024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.202131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.202290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.202506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.202751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.202855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.202871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.203088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.203740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.203978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.203992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.204422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.204962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.204977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 
00:38:44.123 [2024-07-12 11:44:30.205046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.123 [2024-07-12 11:44:30.205067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.123 qpair failed and we were unable to recover it. 00:38:44.123 [2024-07-12 11:44:30.205136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.205286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.205401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.205483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.205742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.205842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.205926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.205941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.206316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.206907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.206921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.206998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.207582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.207922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.207998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.208096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.208248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.208348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.208552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.208690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.208887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.208907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 
00:38:44.124 [2024-07-12 11:44:30.209064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.124 [2024-07-12 11:44:30.209083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.124 qpair failed and we were unable to recover it. 00:38:44.124 [2024-07-12 11:44:30.209159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.209340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.209466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.209577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 
00:38:44.125 [2024-07-12 11:44:30.209688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.209795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.209816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.209992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.210159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.210265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 
00:38:44.125 [2024-07-12 11:44:30.210389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.210501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.210705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.210889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.210909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 00:38:44.125 [2024-07-12 11:44:30.211056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.125 [2024-07-12 11:44:30.211076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.125 qpair failed and we were unable to recover it. 
00:38:44.125 [2024-07-12 11:44:30.211165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.125 [2024-07-12 11:44:30.211185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.125 qpair failed and we were unable to recover it.
00:38:44.125 [2024-07-12 11:44:30.211342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.125 [2024-07-12 11:44:30.211362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.125 qpair failed and we were unable to recover it.
00:38:44.125 [2024-07-12 11:44:30.211459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.125 [2024-07-12 11:44:30.211476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.125 qpair failed and we were unable to recover it.
[... the same three-line triplet — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 11:44:30.211645 through 11:44:30.229475 (wall-clock marks 00:38:44.125–00:38:44.129) ...]
00:38:44.129 [2024-07-12 11:44:30.229617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.229633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.229762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.229778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.229872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.229888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.229973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.229989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.230192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.230208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 
00:38:44.129 [2024-07-12 11:44:30.230301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.230316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.230484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.230502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.230641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.230658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.230892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.230909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.230995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 
00:38:44.129 [2024-07-12 11:44:30.231265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.231442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.231615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.231789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.231964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.231981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 
00:38:44.129 [2024-07-12 11:44:30.232189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.232227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.232423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.232454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.232632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.232660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.232827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.129 [2024-07-12 11:44:30.232848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.129 qpair failed and we were unable to recover it. 00:38:44.129 [2024-07-12 11:44:30.233090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.233111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.233297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.233318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.233495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.233516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.233761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.233782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.233887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.233907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.234123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.234144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.234331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.234353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.234550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.234572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.234726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.234745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.234852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.234878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.235044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.235148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.235351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.235539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.235681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.235868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.235888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.236091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.236111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.236269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.236291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.236395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.236418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.236604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.236623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.236839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.236860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.237108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.237226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.237425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.237658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.237784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.237960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.237981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.238147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.238329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.238504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.238648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 
00:38:44.130 [2024-07-12 11:44:30.238783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.130 [2024-07-12 11:44:30.238892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.130 [2024-07-12 11:44:30.238912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.130 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.239011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.239249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.239368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 
00:38:44.131 [2024-07-12 11:44:30.239617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.239718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.239873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.239887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.240047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.240286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 
00:38:44.131 [2024-07-12 11:44:30.240455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.240624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.240726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.240899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.240915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.241159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.241175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 
00:38:44.131 [2024-07-12 11:44:30.241312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.241328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.241480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.241495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.241654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.241669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.241827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.241846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.242052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 
00:38:44.131 [2024-07-12 11:44:30.242212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.242430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.242686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.242797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 00:38:44.131 [2024-07-12 11:44:30.242956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.131 [2024-07-12 11:44:30.242971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.131 qpair failed and we were unable to recover it. 
00:38:44.131 [2024-07-12 11:44:30.243172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.131 [2024-07-12 11:44:30.243188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.131 qpair failed and we were unable to recover it.
    [the same three-line failure — connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats 59 more times for tqpair=0x61500033fe80 through 2024-07-12 11:44:30.254209]
00:38:44.133 [2024-07-12 11:44:30.254309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.133 [2024-07-12 11:44:30.254334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:44.133 qpair failed and we were unable to recover it.
00:38:44.133 [2024-07-12 11:44:30.254550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.133 [2024-07-12 11:44:30.254580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.133 qpair failed and we were unable to recover it.
00:38:44.133 [2024-07-12 11:44:30.254780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.133 [2024-07-12 11:44:30.254806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.133 qpair failed and we were unable to recover it.
    [repeats 37 more times for tqpair=0x61500032ff80 through 2024-07-12 11:44:30.262226]
00:38:44.135 [2024-07-12 11:44:30.262442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.135 [2024-07-12 11:44:30.262460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.135 qpair failed and we were unable to recover it.
    [repeats 14 more times for tqpair=0x61500033fe80 through 2024-07-12 11:44:30.265035]
00:38:44.135 [2024-07-12 11:44:30.265182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.265198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.265342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.265357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.265514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.265529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.265666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.265682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.265825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.265840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 
00:38:44.135 [2024-07-12 11:44:30.266061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.266078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.266248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.266270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.266491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.266507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.266679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.266695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.266898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.266913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 
00:38:44.135 [2024-07-12 11:44:30.267007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.267022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.135 qpair failed and we were unable to recover it. 00:38:44.135 [2024-07-12 11:44:30.267262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.135 [2024-07-12 11:44:30.267277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.267515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.267530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.267777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.267793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.268014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.268196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.268401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.268556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.268639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.268832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.268847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.269071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.269086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.269182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.269197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.269409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.269425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.269565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.269580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.269799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.269815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.270039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.270054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.270196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.270212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.270426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.270441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.270675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.270691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.270777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.270792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.271035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.271191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.271358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.271554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.271649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.271864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.271878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.272051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.272066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.272292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.272307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.272522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.272540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.272699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.272714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 
00:38:44.136 [2024-07-12 11:44:30.272901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.272916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.136 [2024-07-12 11:44:30.273098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.136 [2024-07-12 11:44:30.273113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.136 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.273337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.273351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.273620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.273635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.273858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.273873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.274023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.274117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.274361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.274530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.274638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.274883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.274897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.275147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.275162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.275367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.275389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.275598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.275613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.275695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.275710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.275915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.275929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.276044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.276261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.276420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.276572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.276803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.276977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.276992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.277196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.277211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.277371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.277390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.277608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.277623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.277716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.277733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.277886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.277901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.278098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.278113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.278260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.278274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.278531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.278547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 
00:38:44.137 [2024-07-12 11:44:30.278745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.278765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.278973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.278989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.279265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.137 [2024-07-12 11:44:30.279280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.137 qpair failed and we were unable to recover it. 00:38:44.137 [2024-07-12 11:44:30.279434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.279450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 00:38:44.138 [2024-07-12 11:44:30.279592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.279608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 
00:38:44.138 [2024-07-12 11:44:30.279807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.279821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 00:38:44.138 [2024-07-12 11:44:30.279954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.279968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 00:38:44.138 [2024-07-12 11:44:30.280201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.280217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 00:38:44.138 [2024-07-12 11:44:30.280370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.280395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 00:38:44.138 [2024-07-12 11:44:30.280503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.138 [2024-07-12 11:44:30.280518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.138 qpair failed and we were unable to recover it. 
00:38:44.141 [2024-07-12 11:44:30.301701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.301717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.301868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.301883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.302104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.302119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.302263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.302278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.302364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.302384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 
00:38:44.142 [2024-07-12 11:44:30.302615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.302630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.302762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.302776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.303014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.303181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.303291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 
00:38:44.142 [2024-07-12 11:44:30.303509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.303668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.303915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.303931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.304099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.304114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.304371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.304397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 
00:38:44.142 [2024-07-12 11:44:30.304557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.304572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.304793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.304808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.305045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.305215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.305431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 
00:38:44.142 [2024-07-12 11:44:30.305540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.305707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.305875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.305890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.306143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.306158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.306244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.306258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 
00:38:44.142 [2024-07-12 11:44:30.306434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.306450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.306682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.306697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.306848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.306863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.142 [2024-07-12 11:44:30.307070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.142 [2024-07-12 11:44:30.307085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.142 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.307225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.307241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.307472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.307487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.307723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.307737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.307905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.307920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.308142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.308157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.308368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.308387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.308633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.308699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.308916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.308931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.309104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.309118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.309273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.309289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.309443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.309458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.309678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.309694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.309899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.309914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.310067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.310082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.310284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.310300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.310495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.310510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.310735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.310750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.310888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.310903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.311082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.311096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.311252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.311267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.311443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.311458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.311690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.311708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.311900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.311914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.312066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.312081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.312233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.312248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.312401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.312416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 
00:38:44.143 [2024-07-12 11:44:30.312657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.312672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.312876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.312891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.313135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.313150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.143 [2024-07-12 11:44:30.313362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.143 [2024-07-12 11:44:30.313381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.143 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.313646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.313661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.313812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.313826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.314055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.314304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.314482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.314655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.314815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.314967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.314982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.315119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.315133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.315355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.315370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.315521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.315537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.315698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.315713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.315935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.315950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.316102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.316257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.316437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.316542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.316783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.316889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.316904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.317153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.317168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.317314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.317329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.317554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.317569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.317781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.317796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.317941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.317955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.318095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.318196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.318365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.318533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.318695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.318884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.318899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.144 [2024-07-12 11:44:30.319064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.319079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 
00:38:44.144 [2024-07-12 11:44:30.319230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.144 [2024-07-12 11:44:30.319246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.144 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.319387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.319403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.319485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.319500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.319652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.319667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.319879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.319894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 
00:38:44.145 [2024-07-12 11:44:30.320075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.320089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.320345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.320591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.320606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.320830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.320844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.320931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.320946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 
00:38:44.145 [2024-07-12 11:44:30.321173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.321188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.321407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.321422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.321584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.321599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.321803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.321818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.321901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.321916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 
00:38:44.145 [2024-07-12 11:44:30.322051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.322067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.322267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.322282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.322432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.322447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.322601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.322616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.322763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.322778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 
00:38:44.145 [2024-07-12 11:44:30.322995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.323009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.323146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.323162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.323243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.323257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.323501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.145 [2024-07-12 11:44:30.323516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.145 qpair failed and we were unable to recover it. 00:38:44.145 [2024-07-12 11:44:30.323666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.323680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.323829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.323847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.323939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.323953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.324155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.324170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.324339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.324354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.324597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.324613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.324745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.324760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.324969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.324984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.325156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.325258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.325428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.325588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.325705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.325876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.325891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.326083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.326278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.326371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.326563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.326731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.326890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.326906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.327136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.327300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.327454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.327557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.327641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 A controller has encountered a failure and is being reset. 00:38:44.146 [2024-07-12 11:44:30.327858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.327884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.328112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.328139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.328246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.328261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.328476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.328491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.328721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.328736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.328900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.328914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 
00:38:44.146 [2024-07-12 11:44:30.329058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.329073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.146 [2024-07-12 11:44:30.329250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.146 [2024-07-12 11:44:30.329265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.146 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.329490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.329506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.329679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.329693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.329804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.329972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.329987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.330131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.330146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.330371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.330391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.330536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.330552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.330737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.330752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.330919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.330934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.331140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.331156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.331290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.331304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.331549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.331565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.331764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.331779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.331998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.332013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.332255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.332271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.332411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.332427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.332667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.332682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.332841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.332856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.333001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.333015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.333242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.333258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.333496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.333511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.333600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.333616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.333864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.333879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.334118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.334133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.334360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.334375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.334526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.334540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.334718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.334734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.334932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.334946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.335167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.335182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.335271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.335286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.335497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.335513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.335739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.335754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 00:38:44.147 [2024-07-12 11:44:30.335841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.147 [2024-07-12 11:44:30.335856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.147 qpair failed and we were unable to recover it. 
00:38:44.147 [2024-07-12 11:44:30.336066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.336084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.336184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.336199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.336452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.336467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.336624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.336638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.336813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.336829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.337027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.337042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.337256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.337271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.337420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.337436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.337682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.337697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.337832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.337847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.338089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.338287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.338506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.338724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.338916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.338995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.339149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.339305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.339457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.339671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.339820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.339835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.340012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.340029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.340260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.340280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.340456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.340472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.340640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.340655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.340879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.340894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.341073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.341088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.341347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.341362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.341570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.341585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.341726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.341742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.341941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.341955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.342113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.342128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.342353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.342368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.148 qpair failed and we were unable to recover it.
00:38:44.148 [2024-07-12 11:44:30.342604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.148 [2024-07-12 11:44:30.342619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.342870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.342885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.343113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.343128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.343296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.343311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.343460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.343476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.343742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.343758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.343919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.343933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.344136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.344152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.344352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.344367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.344545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.344561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.344721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.344735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.344934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.344950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.345170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.345185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.345420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.345436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.345588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.345603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.345792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.345807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.346856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.346871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.347869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.347884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.348039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.348054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.149 [2024-07-12 11:44:30.348226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.149 [2024-07-12 11:44:30.348241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.149 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.348385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.348400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.348548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.348563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.348737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.348751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.348834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.348849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.349846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.349862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.350032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.350047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.350193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.350209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.350420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.350436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.350667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.350683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.350924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.350939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.351939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.351954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.352159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.352174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.352328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.352342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.352563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.150 [2024-07-12 11:44:30.352584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.150 qpair failed and we were unable to recover it.
00:38:44.150 [2024-07-12 11:44:30.352660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.352674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.352808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.352822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.352978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.352992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.353199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.353213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.353445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.353461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.353542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.353557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.353691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.353706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.353903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.151 [2024-07-12 11:44:30.353918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.151 qpair failed and we were unable to recover it.
00:38:44.151 [2024-07-12 11:44:30.354147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.354165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.354431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.354446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.354668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.354683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.354817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.354831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.355057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 
00:38:44.151 [2024-07-12 11:44:30.355219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.355335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.355524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.355612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.355782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 
00:38:44.151 [2024-07-12 11:44:30.355868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.355883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.356068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.356083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.356326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.356340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.356568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.356583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.356838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.356853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 
00:38:44.151 [2024-07-12 11:44:30.356990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.357005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.357249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.357264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.357488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.357504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.357647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.357662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.357876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.357891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 
00:38:44.151 [2024-07-12 11:44:30.358116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.358131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.358199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.358214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.151 [2024-07-12 11:44:30.358286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.151 [2024-07-12 11:44:30.358301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.151 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.358456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.358471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.358624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.358638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.358772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.358788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.359026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.359042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.359206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.359221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.359484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.359499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.359671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.359686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.359835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.359851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.360049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.360152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.360381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.360579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.360727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.360954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.360969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.361154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.361168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.361324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.361340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.361523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.361539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.361764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.361783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.361882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.361897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.362464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.362982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.362997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.363260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.363276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 
00:38:44.152 [2024-07-12 11:44:30.363428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.363443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.363671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.363686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.363820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.363835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.364090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.152 [2024-07-12 11:44:30.364106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.152 qpair failed and we were unable to recover it. 00:38:44.152 [2024-07-12 11:44:30.364347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.364363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.364544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.364570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.364792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.364812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.364979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.364998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.365084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.365214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.365401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.365567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.365762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.365864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.365883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.366060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.366081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.366173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.366194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.366343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.366364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.366614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.366636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.366821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.366841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.367029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.367050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.367284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.367309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.367475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.367497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.367701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.367721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.367847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.367867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.368085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.368105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.368262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.368282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.368527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.368547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.368782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.368802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.368944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.368965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.369150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.369171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.153 [2024-07-12 11:44:30.369419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.369442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.369674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.369695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.369788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.369810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.370077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.370100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 00:38:44.153 [2024-07-12 11:44:30.370256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.153 [2024-07-12 11:44:30.370272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.153 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.370503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.370518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.370661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.370676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.370781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.370796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.370963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.370978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.371132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.371146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.371311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.371325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.371583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.371599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.371686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.371701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.371885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.371899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.372038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.372160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.372312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.372553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.372783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.372962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.372977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.373190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.373205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.373406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.373422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.373652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.373667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.373809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.373824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.373973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.373987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.374215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.374230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.374451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.374467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.374676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.374691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.374911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.374926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.375061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.375075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.375221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.375237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.375380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.375396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.375621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.375637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.375805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.375820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.376026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.376041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 
00:38:44.154 [2024-07-12 11:44:30.376268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.376283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.376504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.376519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.154 qpair failed and we were unable to recover it. 00:38:44.154 [2024-07-12 11:44:30.376668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.154 [2024-07-12 11:44:30.376685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.376834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.376849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.377095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.377317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.377480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.377652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.377802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.377975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.377989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.378226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.378241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.378387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.378402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.378555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.378569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.378789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.378804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.379028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.379124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.379341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.379456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.379654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.379897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.379913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.380051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.380308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.380536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.380631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.380815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.380979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.380994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.381130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.381144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.381374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.381394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.381483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.381498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.381717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.381732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.381881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.381895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 
00:38:44.155 [2024-07-12 11:44:30.382723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.155 [2024-07-12 11:44:30.382933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.155 [2024-07-12 11:44:30.382948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.155 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.383507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.383963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.383978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.384118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.384786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.384970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.384985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.385405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.385864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.385879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.386016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.386031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.386113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.386128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.386279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.386294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.386428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.386444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 00:38:44.156 [2024-07-12 11:44:30.386515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.156 [2024-07-12 11:44:30.386530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.156 qpair failed and we were unable to recover it. 
00:38:44.156 [2024-07-12 11:44:30.386679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.156 [2024-07-12 11:44:30.386694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.156 qpair failed and we were unable to recover it.
00:38:44.156 [2024-07-12 11:44:30.386848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.386863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.386957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.386973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.387903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.387929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.388931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.388951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.389882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.389907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.390914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.390933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.391071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.391086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.391247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.391261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.391408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.391423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.157 qpair failed and we were unable to recover it.
00:38:44.157 [2024-07-12 11:44:30.391568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.157 [2024-07-12 11:44:30.391584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.391672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.391686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.391768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.391783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.391867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.391883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.391968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.391983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.392826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.392991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.393970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.393990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.394103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.394124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.394289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.394321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.394552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.394574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.394688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.394708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.394926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.394946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.395168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.395189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.395352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.395530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.395550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.395710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.395731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.395843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.395863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.396961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.396977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.397865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.397881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.398832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.398847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.399074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.399090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.399248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.399264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.399406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.399422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.399582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.399598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.158 qpair failed and we were unable to recover it.
00:38:44.158 [2024-07-12 11:44:30.399720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.158 [2024-07-12 11:44:30.399737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.399906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.399922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.400887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.400909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.401943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.401958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.402964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.402980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.403122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.403138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.403295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.403311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.403390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.403406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.403515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.403530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.403756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.403773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.404000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.404017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.404082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.404097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.404256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.404272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.404367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.159 [2024-07-12 11:44:30.404388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.159 qpair failed and we were unable to recover it.
00:38:44.159 [2024-07-12 11:44:30.404561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.404579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.404678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.404694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.404767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.404782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.404923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.404939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 
00:38:44.159 [2024-07-12 11:44:30.405121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 
00:38:44.159 [2024-07-12 11:44:30.405901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.405919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.405999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.406121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.406335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.406532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 
00:38:44.159 [2024-07-12 11:44:30.406666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.406787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.406886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.406907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.407066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.407088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.407241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.407263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 
00:38:44.159 [2024-07-12 11:44:30.407412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.407433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.407609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.407631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.159 [2024-07-12 11:44:30.407807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.159 [2024-07-12 11:44:30.407829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.159 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.407977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.407998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.408162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.408340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.408468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.408661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.408854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.408958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.408972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.409106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.409657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.409910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.409925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.410334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.410812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.410921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.410936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.411496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.411900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.411920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.412080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.412186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.412302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.412483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.412667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.412848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.412868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.413035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.413204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.413364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.413615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.413779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 
00:38:44.160 [2024-07-12 11:44:30.413939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.413956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.414098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.414113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.414199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.414214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.160 qpair failed and we were unable to recover it. 00:38:44.160 [2024-07-12 11:44:30.414308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.160 [2024-07-12 11:44:30.414324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.414530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.414545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 
00:38:44.161 [2024-07-12 11:44:30.414622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.414637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.414770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.414785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.414932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.414947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.415096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.415267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 
00:38:44.161 [2024-07-12 11:44:30.415361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.415618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.415770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.415924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.415938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.416009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 
00:38:44.161 [2024-07-12 11:44:30.416094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.416250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.416421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.416571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 00:38:44.161 [2024-07-12 11:44:30.416789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.161 [2024-07-12 11:44:30.416803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.161 qpair failed and we were unable to recover it. 
00:38:44.161 [2024-07-12 11:44:30.416885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.416900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.417945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.417968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.418956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.418976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.419836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.419996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.420938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.420953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.161 [2024-07-12 11:44:30.421932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.161 [2024-07-12 11:44:30.421952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.161 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.422891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.422906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.446 [2024-07-12 11:44:30.423688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.446 [2024-07-12 11:44:30.423703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.446 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.423859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.423874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.424921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.424995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.425963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.425978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.426914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.426929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.427906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.427922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.428913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.428930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.429032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.429047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.429116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.447 [2024-07-12 11:44:30.429131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.447 qpair failed and we were unable to recover it.
00:38:44.447 [2024-07-12 11:44:30.429211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.448 [2024-07-12 11:44:30.429227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.448 qpair failed and we were unable to recover it.
00:38:44.448 [2024-07-12 11:44:30.429406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.448 [2024-07-12 11:44:30.429422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.448 qpair failed and we were unable to recover it.
00:38:44.448 [2024-07-12 11:44:30.429560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.448 [2024-07-12 11:44:30.429575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.448 qpair failed and we were unable to recover it.
00:38:44.448 [2024-07-12 11:44:30.429665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.448 [2024-07-12 11:44:30.429679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.448 qpair failed and we were unable to recover it.
00:38:44.448 [2024-07-12 11:44:30.429773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.448 [2024-07-12 11:44:30.429789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.448 qpair failed and we were unable to recover it.
00:38:44.448 [2024-07-12 11:44:30.429863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.429877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.430577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.430854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.430996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.431251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.431911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.431925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.431991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.432403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.432873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.432970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.432985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.433418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 00:38:44.448 [2024-07-12 11:44:30.433884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.448 [2024-07-12 11:44:30.433900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.448 qpair failed and we were unable to recover it. 
00:38:44.448 [2024-07-12 11:44:30.434086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.434185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.434408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.434586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.434687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.434804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.434960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.434975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.435561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.435896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.435992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.436142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.436245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.436405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.436503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.436722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.436910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.436924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.436998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.437690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.437945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.437960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.438240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.438954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.438970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.439107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.439220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.439469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.439656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.449 [2024-07-12 11:44:30.439877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 
00:38:44.449 [2024-07-12 11:44:30.439978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.449 [2024-07-12 11:44:30.439993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.449 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 
00:38:44.450 [2024-07-12 11:44:30.440504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.440918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 00:38:44.450 [2024-07-12 11:44:30.440991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.450 [2024-07-12 11:44:30.441006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.450 qpair failed and we were unable to recover it. 
00:38:44.450 [2024-07-12 11:44:30.441088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.450 [2024-07-12 11:44:30.441102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.450 qpair failed and we were unable to recover it.
[the three messages above repeat continuously, with advancing timestamps, from 11:44:30.441 through 11:44:30.456; every attempt fails the same way against tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 -- repeats elided]
00:38:44.453 [2024-07-12 11:44:30.456351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.456366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.456443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.456458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.456610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.456625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.456704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.456719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.456814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.456829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.453 [2024-07-12 11:44:30.457056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.457243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.457427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.457513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.457683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.453 [2024-07-12 11:44:30.457845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.457860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.453 [2024-07-12 11:44:30.458372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.458818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.458833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.459094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.459128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.453 [2024-07-12 11:44:30.459389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.459416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.459571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.459592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.459674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.459694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.459861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.459881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.460042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.453 [2024-07-12 11:44:30.460174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.460289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.460406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.460598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 00:38:44.453 [2024-07-12 11:44:30.460818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.453 [2024-07-12 11:44:30.460832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.453 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.460919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.460934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.461610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.461872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.461887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.462241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.462828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.462843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.462986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.463519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.463928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.463943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.464316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.464855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.464877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.464981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.465730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.465975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.465988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.466104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.466117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 00:38:44.454 [2024-07-12 11:44:30.466261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.466275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.454 qpair failed and we were unable to recover it. 
00:38:44.454 [2024-07-12 11:44:30.466347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.454 [2024-07-12 11:44:30.466360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.455 qpair failed and we were unable to recover it. 00:38:44.455 [2024-07-12 11:44:30.466453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.455 [2024-07-12 11:44:30.466467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.455 qpair failed and we were unable to recover it. 00:38:44.455 [2024-07-12 11:44:30.466580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.455 [2024-07-12 11:44:30.466600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.455 qpair failed and we were unable to recover it. 00:38:44.455 [2024-07-12 11:44:30.466744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.455 [2024-07-12 11:44:30.466757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.455 qpair failed and we were unable to recover it. 00:38:44.455 [2024-07-12 11:44:30.466825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.455 [2024-07-12 11:44:30.466839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.455 qpair failed and we were unable to recover it. 
00:38:44.455 [2024-07-12 11:44:30.466981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.466994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.467838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.467851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.468926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.468939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.469955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.469969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.470875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.470895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.471072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.471091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.455 qpair failed and we were unable to recover it.
00:38:44.455 [2024-07-12 11:44:30.471244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.455 [2024-07-12 11:44:30.471263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.471385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.471404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.471509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.471527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.471710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.471728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.471805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.471818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.471903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.471916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.472953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.472967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.473982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.473995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.474861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.474994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.475885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.475994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.476013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.476164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.476183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.476261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.476279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.476454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.456 [2024-07-12 11:44:30.476475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.456 qpair failed and we were unable to recover it.
00:38:44.456 [2024-07-12 11:44:30.476634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.476653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.476745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.476763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.476921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.476939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.477897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.477915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.478986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.478999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.479904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.479917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.480848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.480861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.481794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.481816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.482020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.482113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.482298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.482470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.457 [2024-07-12 11:44:30.482586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.457 qpair failed and we were unable to recover it.
00:38:44.457 [2024-07-12 11:44:30.482690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.458 [2024-07-12 11:44:30.482708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.458 qpair failed and we were unable to recover it.
00:38:44.458 [2024-07-12 11:44:30.482810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.458 [2024-07-12 11:44:30.482828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.458 qpair failed and we were unable to recover it.
00:38:44.458 [2024-07-12 11:44:30.482924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.458 [2024-07-12 11:44:30.482943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.458 qpair failed and we were unable to recover it.
00:38:44.458 [2024-07-12 11:44:30.483023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.458 [2024-07-12 11:44:30.483041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:44.458 qpair failed and we were unable to recover it.
00:38:44.458 [2024-07-12 11:44:30.483188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.483206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.483317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.483331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.483452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.483617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.483630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.483777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.483791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.484003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.484746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.484963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.484976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.485330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.485918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.485931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.486076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.486812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.486978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.486990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.487195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.487209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.487373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.487398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.487548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.487567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.487736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.487755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.487863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.487881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.487985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.488003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.488193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.488211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 
00:38:44.458 [2024-07-12 11:44:30.488315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.488333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.458 qpair failed and we were unable to recover it. 00:38:44.458 [2024-07-12 11:44:30.488514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.458 [2024-07-12 11:44:30.488534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.488626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.488645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.488818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.488836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.488944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.488962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.489043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.489708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.489943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.489961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.490069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.490235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.490424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.490610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.490717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.490916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.490934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.491094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.491113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.491263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.491291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.491460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.491480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.491661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.491680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.491858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.491876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.492036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.492128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.492291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.492528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.492649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.492761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.492879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.492897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.493139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.493157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.493258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.493276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.493387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.493407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.493558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.493580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 
00:38:44.459 [2024-07-12 11:44:30.493742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.459 [2024-07-12 11:44:30.493760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.459 qpair failed and we were unable to recover it. 00:38:44.459 [2024-07-12 11:44:30.493853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.460 [2024-07-12 11:44:30.493871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.460 qpair failed and we were unable to recover it. 00:38:44.460 [2024-07-12 11:44:30.493956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.460 [2024-07-12 11:44:30.493975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.460 qpair failed and we were unable to recover it. 00:38:44.460 [2024-07-12 11:44:30.494127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.460 [2024-07-12 11:44:30.494144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.460 qpair failed and we were unable to recover it. 00:38:44.460 [2024-07-12 11:44:30.494290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.460 [2024-07-12 11:44:30.494302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.460 qpair failed and we were unable to recover it. 
00:38:44.460 [2024-07-12 11:44:30.494397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.460 [2024-07-12 11:44:30.494411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.460 qpair failed and we were unable to recover it.
00:38:44.460 [... the same connect() failed (errno = 111) / sock connection error / qpair-failure messages for tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 repeat through 11:44:30.500366 ...]
00:38:44.461 [2024-07-12 11:44:30.500515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.500537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 00:38:44.461 [2024-07-12 11:44:30.500692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.500710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 00:38:44.461 [2024-07-12 11:44:30.500908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.500927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 00:38:44.461 [2024-07-12 11:44:30.501024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.501043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 00:38:44.461 [2024-07-12 11:44:30.501200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.501218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 
00:38:44.461 [2024-07-12 11:44:30.501371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.461 [2024-07-12 11:44:30.501394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.461 qpair failed and we were unable to recover it. 00:38:44.461 [2024-07-12 11:44:30.501605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.501623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.501776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.501794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.501953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.501971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.502278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.502297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 
00:38:44.462 [2024-07-12 11:44:30.502459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.502478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.502600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.502618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.502726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.502744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.502852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.502871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.503035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.503053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 
00:38:44.462 [2024-07-12 11:44:30.503305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.503324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.503475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.503495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.503681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.503700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.503851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.503869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 00:38:44.462 [2024-07-12 11:44:30.503981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.462 [2024-07-12 11:44:30.504001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.462 qpair failed and we were unable to recover it. 
00:38:44.462 [2024-07-12 11:44:30.504090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.462 [2024-07-12 11:44:30.504106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.462 qpair failed and we were unable to recover it.
00:38:44.463 [... the same connect() failed (errno = 111) / sock connection error / qpair-failure messages for tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 repeat through 11:44:30.509866 ...]
00:38:44.463 [2024-07-12 11:44:30.509944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.463 [2024-07-12 11:44:30.509957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.463 qpair failed and we were unable to recover it. 00:38:44.463 [2024-07-12 11:44:30.510038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.463 [2024-07-12 11:44:30.510051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.463 qpair failed and we were unable to recover it. 00:38:44.463 [2024-07-12 11:44:30.510230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.463 [2024-07-12 11:44:30.510243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.463 qpair failed and we were unable to recover it. 00:38:44.463 [2024-07-12 11:44:30.510327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.510442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.510556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.510708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.510793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.510945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.510958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.511192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.511655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.511902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.511916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.512300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.512654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.512737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.512751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 
00:38:44.464 [2024-07-12 11:44:30.513489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.464 qpair failed and we were unable to recover it. 00:38:44.464 [2024-07-12 11:44:30.513978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.464 [2024-07-12 11:44:30.513992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 
00:38:44.465 [2024-07-12 11:44:30.514065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 
00:38:44.465 [2024-07-12 11:44:30.514564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.514977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.514990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 
00:38:44.465 [2024-07-12 11:44:30.515197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 
00:38:44.465 [2024-07-12 11:44:30.515753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.515854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.515995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.516007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.516174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.516187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.516272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.516285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 
00:38:44.465 [2024-07-12 11:44:30.516386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.516399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.516486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.465 [2024-07-12 11:44:30.516499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.465 qpair failed and we were unable to recover it. 00:38:44.465 [2024-07-12 11:44:30.516575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.516587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.516660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.516673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.516759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.516772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 
00:38:44.466 [2024-07-12 11:44:30.516841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.516854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.516947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.516963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 
00:38:44.466 [2024-07-12 11:44:30.517408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.517923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.517936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 
00:38:44.466 [2024-07-12 11:44:30.518074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.518086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.518228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.518242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.518313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.518326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.518458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.518473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 00:38:44.466 [2024-07-12 11:44:30.518548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.466 [2024-07-12 11:44:30.518563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.466 qpair failed and we were unable to recover it. 
00:38:44.466 [2024-07-12 11:44:30.518628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.466 [2024-07-12 11:44:30.518641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.466 qpair failed and we were unable to recover it.
[The three-line error sequence above (posix.c:1038 connect() failed with errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats identically from 11:44:30.518628 through 11:44:30.531539 (console timestamps 00:38:44.466 to 00:38:44.470); duplicate entries elided.]
00:38:44.470 [2024-07-12 11:44:30.531619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.531632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.531703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.531715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.531847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.531860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.531933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.531946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 
00:38:44.470 [2024-07-12 11:44:30.532177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 
00:38:44.470 [2024-07-12 11:44:30.532763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.532974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.532986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.533059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.533071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 00:38:44.470 [2024-07-12 11:44:30.533153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.470 [2024-07-12 11:44:30.533166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.470 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.533240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.533691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.533867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.533881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.534295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.534757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.534981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.534993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.535336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 
00:38:44.471 [2024-07-12 11:44:30.535892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.535975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.535988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.536068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.536081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.536257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.536270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.471 qpair failed and we were unable to recover it. 00:38:44.471 [2024-07-12 11:44:30.536348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.471 [2024-07-12 11:44:30.536361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.536436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.536449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.536600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.536613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.536686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.536699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.536787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.536800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.536886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.536901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.536989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.537139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.537233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.537386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.537527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.537742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.537828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.537841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.538475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.538901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.538913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.539060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.539248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.539436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.539526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.539676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.539770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.539782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.540032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.540047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.540236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.540250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.540444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.540458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.540673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.540686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 00:38:44.472 [2024-07-12 11:44:30.540771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.472 [2024-07-12 11:44:30.540784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.472 qpair failed and we were unable to recover it. 
00:38:44.472 [2024-07-12 11:44:30.541011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:44.473 [2024-07-12 11:44:30.541025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 
00:38:44.473 qpair failed and we were unable to recover it. 
00:38:44.473 [the same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim with advancing timestamps from 2024-07-12 11:44:30.541196 through 2024-07-12 11:44:30.557477] 
00:38:44.476 [2024-07-12 11:44:30.557647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.476 [2024-07-12 11:44:30.557662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.476 qpair failed and we were unable to recover it. 00:38:44.476 [2024-07-12 11:44:30.557750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.476 [2024-07-12 11:44:30.557762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.476 qpair failed and we were unable to recover it. 00:38:44.476 [2024-07-12 11:44:30.557907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.476 [2024-07-12 11:44:30.557920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.558188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.558339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.558513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.558663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.558773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.558888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.558902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.559074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.559228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.559374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.559556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.559664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.559757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.559864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.559877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.560079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.560177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.560425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.560530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.560791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.560941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.560956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.561652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.561972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.561986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.562089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.562103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 
00:38:44.477 [2024-07-12 11:44:30.562186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.562200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.562410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.562424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.562593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.562607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.562707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.477 [2024-07-12 11:44:30.562720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.477 qpair failed and we were unable to recover it. 00:38:44.477 [2024-07-12 11:44:30.562918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.562931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.563162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.563404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.563514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.563618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.563734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.563840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.563983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.563996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.564141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.564231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.564411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.564526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.564692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.564809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.564822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.565286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.565864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.565946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.566687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.566874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.566886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.567094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.567107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.567190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.567203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 
00:38:44.478 [2024-07-12 11:44:30.567346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.567359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.478 [2024-07-12 11:44:30.567485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.478 [2024-07-12 11:44:30.567499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.478 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.567646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.567658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.567743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.567756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.567844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.567857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 
00:38:44.479 [2024-07-12 11:44:30.567955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.567968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.568111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.568124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.568220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.568232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.568318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.568331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 00:38:44.479 [2024-07-12 11:44:30.568474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.479 [2024-07-12 11:44:30.568487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.479 qpair failed and we were unable to recover it. 
00:38:44.479 [2024-07-12 11:44:30.568588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.479 [2024-07-12 11:44:30.568601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.479 qpair failed and we were unable to recover it.
00:38:44.479 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 11:44:30.568745 through 11:44:30.583719 ...]
00:38:44.482 [2024-07-12 11:44:30.583814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.482 [2024-07-12 11:44:30.583828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.482 qpair failed and we were unable to recover it. 00:38:44.482 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:44.482 [2024-07-12 11:44:30.584064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.482 [2024-07-12 11:44:30.584080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.482 qpair failed and we were unable to recover it. 00:38:44.482 [2024-07-12 11:44:30.584215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.482 [2024-07-12 11:44:30.584228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.482 qpair failed and we were unable to recover it. 00:38:44.482 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:38:44.482 [2024-07-12 11:44:30.584392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.482 [2024-07-12 11:44:30.584407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.482 qpair failed and we were unable to recover it. 00:38:44.482 [2024-07-12 11:44:30.584512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.482 [2024-07-12 11:44:30.584525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.482 qpair failed and we were unable to recover it. 
00:38:44.482 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:38:44.482 [2024-07-12 11:44:30.584618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.584641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.584751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.584763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:38:44.482 [2024-07-12 11:44:30.584899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.584915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.585136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.585150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.585291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.585305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.585506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.585520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.585670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.482 [2024-07-12 11:44:30.585683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.482 qpair failed and we were unable to recover it.
00:38:44.482 [2024-07-12 11:44:30.585783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.585796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.585891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.585904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.585991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.586188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.586400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.586573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.586684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.586791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.586804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.587979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.587992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.588826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.588839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.589954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.589968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.483 [2024-07-12 11:44:30.590051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.483 [2024-07-12 11:44:30.590063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.483 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.590854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.590867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.591984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.591997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.592157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.592170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.592388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.592402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.592507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.592520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.592660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.592674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.592902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.592916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.593825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.593844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.594088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.594102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.594246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.594259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.594434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.594448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.594557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.484 [2024-07-12 11:44:30.594570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.484 qpair failed and we were unable to recover it.
00:38:44.484 [2024-07-12 11:44:30.594651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.594664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.594765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.594778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.594863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.594877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.594975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.594988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.595873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.595887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.596844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.596995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.597959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.597971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.598070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.598162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.598239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.598330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.485 [2024-07-12 11:44:30.598434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.485 qpair failed and we were unable to recover it.
00:38:44.485 [2024-07-12 11:44:30.598530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.598543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.598629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.598643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.598729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.598742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.598820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.598834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.598918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.598934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.599948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.599961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.600948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.600962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.486 [2024-07-12 11:44:30.601595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.486 [2024-07-12 11:44:30.601608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.486 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.601750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.601763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.601835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.601849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.601989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.602974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.602988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.487 [2024-07-12 11:44:30.603778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.487 qpair failed and we were unable to recover it.
00:38:44.487 [2024-07-12 11:44:30.603864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.603877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.603951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.603964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.604941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.604954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.605900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.605914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.606953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.606966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.607031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.607043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.607121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.607134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.607202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.607215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.488 qpair failed and we were unable to recover it.
00:38:44.488 [2024-07-12 11:44:30.607293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.488 [2024-07-12 11:44:30.607308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.489 qpair failed and we were unable to recover it.
00:38:44.489 [2024-07-12 11:44:30.607395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.489 [2024-07-12 11:44:30.607409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.489 qpair failed and we were unable to recover it.
00:38:44.489 [2024-07-12 11:44:30.607543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.489 [2024-07-12 11:44:30.607556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.489 qpair failed and we were unable to recover it.
00:38:44.489 [2024-07-12 11:44:30.607748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.489 [2024-07-12 11:44:30.607761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.489 qpair failed and we were unable to recover it.
00:38:44.489 [2024-07-12 11:44:30.607831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.489 [2024-07-12 11:44:30.607844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.489 qpair failed and we were unable to recover it.
00:38:44.489 [2024-07-12 11:44:30.607919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.607931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.607999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 
00:38:44.489 [2024-07-12 11:44:30.608362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 
00:38:44.489 [2024-07-12 11:44:30.608845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.608948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.608961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 
00:38:44.489 [2024-07-12 11:44:30.609314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 
00:38:44.489 [2024-07-12 11:44:30.609844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.609943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.609956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 
00:38:44.489 [2024-07-12 11:44:30.610315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.489 qpair failed and we were unable to recover it. 00:38:44.489 [2024-07-12 11:44:30.610578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.489 [2024-07-12 11:44:30.610591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.610728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.610741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.610809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.610822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.610892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.610905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.610981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.610994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.611269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.611763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.611956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.611969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.612319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.612753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.612926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.612938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.613248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 
00:38:44.490 [2024-07-12 11:44:30.613708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.490 [2024-07-12 11:44:30.613802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.490 qpair failed and we were unable to recover it. 00:38:44.490 [2024-07-12 11:44:30.613888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.613905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 
00:38:44.491 [2024-07-12 11:44:30.614251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 
00:38:44.491 [2024-07-12 11:44:30.614813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.614969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.614983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 
00:38:44.491 [2024-07-12 11:44:30.615348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 
00:38:44.491 [2024-07-12 11:44:30.615818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.615832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.615987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.616000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.616142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.616155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.616235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.616248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 00:38:44.491 [2024-07-12 11:44:30.616387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.491 [2024-07-12 11:44:30.616400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.491 qpair failed and we were unable to recover it. 
00:38:44.491 [2024-07-12 11:44:30.616533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.616545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.616625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.616638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.616811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.616824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.616908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.616922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.491 [2024-07-12 11:44:30.617695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.491 qpair failed and we were unable to recover it.
00:38:44.491 [2024-07-12 11:44:30.617765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.617778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.617856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.617869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.617937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.617951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.618975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.618987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:44.492 [2024-07-12 11:44:30.619244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:44.492 [2024-07-12 11:44:30.619648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.619911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.619925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:44.492 [2024-07-12 11:44:30.620001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.620099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.620190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:44.492 [2024-07-12 11:44:30.620334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.620420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.492 [2024-07-12 11:44:30.620503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.492 [2024-07-12 11:44:30.620516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.492 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.620598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.620611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.620677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.620690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.620764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.620776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.620875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.620889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.620957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.620975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.621958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.621970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.622883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.622992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.493 [2024-07-12 11:44:30.623622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.493 qpair failed and we were unable to recover it.
00:38:44.493 [2024-07-12 11:44:30.623691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.623703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.623773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.623786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.623850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.623862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.623939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.623952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.624917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.624930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.625919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.625997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.494 [2024-07-12 11:44:30.626659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.494 [2024-07-12 11:44:30.626673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.494 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.626753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.495 [2024-07-12 11:44:30.626772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.495 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.626843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.495 [2024-07-12 11:44:30.626856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.495 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.626932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.495 [2024-07-12 11:44:30.626945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.495 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.627012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.495 [2024-07-12 11:44:30.627026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.495 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.627102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.495 [2024-07-12 11:44:30.627115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.495 qpair failed and we were unable to recover it.
00:38:44.495 [2024-07-12 11:44:30.627186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.627627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.627963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.627976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.628046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.628476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.628916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.628928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.628995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.629508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 00:38:44.495 [2024-07-12 11:44:30.629862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.495 [2024-07-12 11:44:30.629875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.495 qpair failed and we were unable to recover it. 
00:38:44.495 [2024-07-12 11:44:30.629941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.629954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.630370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.630812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.630911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.630987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.631251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.631694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.631955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.631968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.632139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.632625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.632946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.632960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.633028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.633041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 
00:38:44.496 [2024-07-12 11:44:30.633153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.633166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.633236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.633249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.633324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.633337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.633412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.496 [2024-07-12 11:44:30.633426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.496 qpair failed and we were unable to recover it. 00:38:44.496 [2024-07-12 11:44:30.633501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 
00:38:44.497 [2024-07-12 11:44:30.633585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.633675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.633755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.633847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.633927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.633940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 
00:38:44.497 [2024-07-12 11:44:30.634011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.634025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.634140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.634153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.634225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.634238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.634314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.634328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 00:38:44.497 [2024-07-12 11:44:30.634443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.497 [2024-07-12 11:44:30.634457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.497 qpair failed and we were unable to recover it. 
00:38:44.500 [2024-07-12 11:44:30.646117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 
00:38:44.500 [2024-07-12 11:44:30.646566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.646971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.646984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 
00:38:44.500 [2024-07-12 11:44:30.647053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.647065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.500 [2024-07-12 11:44:30.647150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.500 [2024-07-12 11:44:30.647162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.500 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.647635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.647967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.647981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.648207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.648777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.648859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.648871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.649314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.649902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.649984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.649996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.650428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.650855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.650958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.650970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 
00:38:44.501 [2024-07-12 11:44:30.651482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.501 qpair failed and we were unable to recover it. 00:38:44.501 [2024-07-12 11:44:30.651668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.501 [2024-07-12 11:44:30.651681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.651823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.651835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.651903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.651915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.502 [2024-07-12 11:44:30.652065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.502 [2024-07-12 11:44:30.652694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.652975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.652988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.502 [2024-07-12 11:44:30.653355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.502 [2024-07-12 11:44:30.653869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.653971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.653984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.502 [2024-07-12 11:44:30.654454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 00:38:44.502 [2024-07-12 11:44:30.654880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.502 [2024-07-12 11:44:30.654893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.502 qpair failed and we were unable to recover it. 
00:38:44.505 [2024-07-12 11:44:30.667809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.667822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.667912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.667926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 
00:38:44.505 [2024-07-12 11:44:30.668561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.668932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.668945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 
00:38:44.505 [2024-07-12 11:44:30.669032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.669045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.669116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.669128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.669210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.669223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.505 [2024-07-12 11:44:30.669294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.505 [2024-07-12 11:44:30.669307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.505 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.669403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.669510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.669601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.669701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.669795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.669893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.669906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.670035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.670521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.670952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.670965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.671041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.671631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.671915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.671999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.672079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.672624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.672896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.672909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.673066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.673079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 
00:38:44.506 [2024-07-12 11:44:30.673159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.673172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.673247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.673259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.506 [2024-07-12 11:44:30.673336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.506 [2024-07-12 11:44:30.673348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.506 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.673502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.673516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.673597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.673609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 
00:38:44.507 [2024-07-12 11:44:30.673697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.673710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.673779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.673791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.673928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.673941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 
00:38:44.507 [2024-07-12 11:44:30.674211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 
00:38:44.507 [2024-07-12 11:44:30.674755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.674932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.674947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 
00:38:44.507 [2024-07-12 11:44:30.675232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 00:38:44.507 [2024-07-12 11:44:30.675590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.507 [2024-07-12 11:44:30.675603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.507 qpair failed and we were unable to recover it. 
00:38:44.507 [2024-07-12 11:44:30.675743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.507 [2024-07-12 11:44:30.675757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.507 qpair failed and we were unable to recover it.
00:38:44.507 [2024-07-12 11:44:30.675826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.507 [2024-07-12 11:44:30.675839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.507 qpair failed and we were unable to recover it.
00:38:44.507 [2024-07-12 11:44:30.675923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.507 [2024-07-12 11:44:30.675936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.507 qpair failed and we were unable to recover it.
00:38:44.507 [2024-07-12 11:44:30.676008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.507 [2024-07-12 11:44:30.676020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.507 qpair failed and we were unable to recover it.
00:38:44.507 [2024-07-12 11:44:30.676177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.676891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.676903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.677964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.677977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.678914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.678927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.679089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.679103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.679175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.679187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.679256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.679269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.679348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.679364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.508 [2024-07-12 11:44:30.679458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.508 [2024-07-12 11:44:30.679470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.508 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.679552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.679566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.679638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.679650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.679741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.679753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.679843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.679856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.679949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.679961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.680912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.680925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.681959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.681971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.509 qpair failed and we were unable to recover it.
00:38:44.509 [2024-07-12 11:44:30.682591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.509 [2024-07-12 11:44:30.682603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.682739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.682751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.682822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.682834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.682969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.682982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.683922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.683936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.684899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.684912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.685983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.685996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.510 [2024-07-12 11:44:30.686060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.510 [2024-07-12 11:44:30.686073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.510 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.686832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.686988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.687961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.511 [2024-07-12 11:44:30.687974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.511 qpair failed and we were unable to recover it.
00:38:44.511 [2024-07-12 11:44:30.688129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 
00:38:44.511 [2024-07-12 11:44:30.688669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.688973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.688985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 
00:38:44.511 [2024-07-12 11:44:30.689364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 00:38:44.511 [2024-07-12 11:44:30.689908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.511 [2024-07-12 11:44:30.689920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.511 qpair failed and we were unable to recover it. 
00:38:44.512 [2024-07-12 11:44:30.690007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 
00:38:44.512 [2024-07-12 11:44:30.690651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.690909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.690993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 
00:38:44.512 [2024-07-12 11:44:30.691234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 
00:38:44.512 [2024-07-12 11:44:30.691847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.691860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.691994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.692091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.692172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.692259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 
00:38:44.512 [2024-07-12 11:44:30.692351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.512 [2024-07-12 11:44:30.692512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.512 [2024-07-12 11:44:30.692526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.512 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.692601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.692613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.692693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.692706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.692773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.692785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.692855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.692867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.692941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.692955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.693385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.693891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.693974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.693986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.694302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.694778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.694937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.694951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.695013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.695025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.695108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.695123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 
00:38:44.513 [2024-07-12 11:44:30.695221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.695239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.695318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.695332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.513 [2024-07-12 11:44:30.695407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.513 [2024-07-12 11:44:30.695420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.513 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.695485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.695634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 
00:38:44.514 [2024-07-12 11:44:30.695714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.695788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.695868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.695947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.695959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 00:38:44.514 [2024-07-12 11:44:30.696035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.514 [2024-07-12 11:44:30.696047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.514 qpair failed and we were unable to recover it. 
00:38:44.514 [2024-07-12 11:44:30.696124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:44.514 [2024-07-12 11:44:30.696137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:44.514 qpair failed and we were unable to recover it.
00:38:44.517 [identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." messages for tqpair=0x61500033fe80, addr=10.0.0.2, port=4420 repeated for each retry from 11:44:30.696301 through 11:44:30.708401]
00:38:44.517 [2024-07-12 11:44:30.708465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.517 [2024-07-12 11:44:30.708478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.708551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.708563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.708644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.708656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.708738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.708755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 Malloc0 00:38:44.518 [2024-07-12 11:44:30.708889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.708901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 
00:38:44.518 [2024-07-12 11:44:30.708975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.708988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 
00:38:44.518 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.518 [2024-07-12 11:44:30.709553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.709730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:44.518 [2024-07-12 11:44:30.709882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.709896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 
00:38:44.518 [2024-07-12 11:44:30.709991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.518 [2024-07-12 11:44:30.710198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 
00:38:44.518 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:44.518 [2024-07-12 11:44:30.710487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.518 [2024-07-12 11:44:30.710699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.518 qpair failed and we were unable to recover it. 00:38:44.518 [2024-07-12 11:44:30.710769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.710782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.710856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.710869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.710939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.710951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.711361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.711806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.711887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.711899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.712303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.712849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.712948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.712964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 
00:38:44.519 [2024-07-12 11:44:30.713436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.519 qpair failed and we were unable to recover it. 00:38:44.519 [2024-07-12 11:44:30.713790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.519 [2024-07-12 11:44:30.713803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.713874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.713886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.714344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.714831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.714924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.714936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.715469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.715880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.715962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.715974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.520 [2024-07-12 11:44:30.716290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.716472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 00:38:44.520 [2024-07-12 11:44:30.716924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.520 [2024-07-12 11:44:30.716936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.520 qpair failed and we were unable to recover it. 
00:38:44.520 [2024-07-12 11:44:30.717014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.717575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.717918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.717931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.718012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.718581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.718840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.718853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.719084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.719689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.719974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.719989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 00:38:44.521 [2024-07-12 11:44:30.720062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.521 [2024-07-12 11:44:30.720074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.521 qpair failed and we were unable to recover it. 
00:38:44.521 [2024-07-12 11:44:30.720139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.720618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.720955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.720968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.721126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.721641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.721908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.721926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.722187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.722654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.722983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.722996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 00:38:44.522 [2024-07-12 11:44:30.723152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.723165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.522 qpair failed and we were unable to recover it. 
00:38:44.522 [2024-07-12 11:44:30.723329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.522 [2024-07-12 11:44:30.723342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.723480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.723493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.723582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.723594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.723752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.723765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.723859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.723872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.523 [2024-07-12 11:44:30.723950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.723963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.523 [2024-07-12 11:44:30.724498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.724838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.523 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.523 [2024-07-12 11:44:30.724935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.724949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.725105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.725204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:44.523 [2024-07-12 11:44:30.725359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.523 [2024-07-12 11:44:30.725448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.725550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.523 [2024-07-12 11:44:30.725565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.725708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.725791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:44.523 [2024-07-12 11:44:30.725946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.725959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.523 [2024-07-12 11:44:30.726031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.726044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.726186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.726200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.726278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.726290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.726375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.726394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 00:38:44.523 [2024-07-12 11:44:30.726480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.523 [2024-07-12 11:44:30.726493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.523 qpair failed and we were unable to recover it. 
00:38:44.524 [2024-07-12 11:44:30.726577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.726590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.726659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.726672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.726809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.726822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 
00:38:44.524 [2024-07-12 11:44:30.727187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 
00:38:44.524 [2024-07-12 11:44:30.727621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.727934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.727947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 
00:38:44.524 [2024-07-12 11:44:30.728185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 
00:38:44.524 [2024-07-12 11:44:30.728654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.524 [2024-07-12 11:44:30.728869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.524 [2024-07-12 11:44:30.728887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.524 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.728979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.728992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.729138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.729709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.729862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.729993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.730342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.730952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.730970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.731058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.731169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.731268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.731441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.731628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.731813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.731917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.731935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 
00:38:44.525 [2024-07-12 11:44:30.732563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.525 qpair failed and we were unable to recover it. 00:38:44.525 [2024-07-12 11:44:30.732878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.525 [2024-07-12 11:44:30.732896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.733116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.733631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.733958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.733970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.734156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.734689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.734976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.734995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.735071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.735282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.735374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.735527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.735702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.735879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.735897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.736050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.736068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 
00:38:44.526 [2024-07-12 11:44:30.736215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.736233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.736396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.736415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.736496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.736514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.736616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 [2024-07-12 11:44:30.736634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.526 qpair failed and we were unable to recover it. 00:38:44.526 [2024-07-12 11:44:30.736880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.526 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.526 [2024-07-12 11:44:30.736901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.737140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:44.527 [2024-07-12 11:44:30.737247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.737357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.527 [2024-07-12 11:44:30.737600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.737708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:44.527 [2024-07-12 11:44:30.737818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.737926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.737943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.738408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.738860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.738878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.739024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.739209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.739370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.739475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.739650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.739857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.739966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.739984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.740142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.740254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.740455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 
00:38:44.527 [2024-07-12 11:44:30.740617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.740727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.740893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.740911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.741075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.527 [2024-07-12 11:44:30.741094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.527 qpair failed and we were unable to recover it. 00:38:44.527 [2024-07-12 11:44:30.741248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.741423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.741533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.741653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.741764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.741876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.741894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.742072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.742254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.742359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.742602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.742704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.742892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.742910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.743618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.743955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.743972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.744067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.744229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.744344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.744566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.744685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.744880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.744898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.528 [2024-07-12 11:44:30.744996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.745014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.528 [2024-07-12 11:44:30.745200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.745220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.528 [2024-07-12 11:44:30.745406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.745426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 [2024-07-12 11:44:30.745535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.745554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 00:38:44.528 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.528 [2024-07-12 11:44:30.745785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.528 [2024-07-12 11:44:30.745805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.528 qpair failed and we were unable to recover it. 
00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:44.529 [2024-07-12 11:44:30.745895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.745913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 
00:38:44.529 [2024-07-12 11:44:30.746508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.746974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.746992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.747148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 
00:38:44.529 [2024-07-12 11:44:30.747264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.747435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.747620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.747802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.747918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.747936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 
00:38:44.529 [2024-07-12 11:44:30.748021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.748039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.748188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.748205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 00:38:44.529 [2024-07-12 11:44:30.748352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:44.529 [2024-07-12 11:44:30.748370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032d780 with addr=10.0.0.2, port=4420 00:38:44.529 qpair failed and we were unable to recover it. 
00:38:44.529 [2024-07-12 11:44:30.748501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:44.529 [2024-07-12 11:44:30.757784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.529 [2024-07-12 11:44:30.757899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.529 [2024-07-12 11:44:30.757931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.529 [2024-07-12 11:44:30.757950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.529 [2024-07-12 11:44:30.757964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.529 [2024-07-12 11:44:30.758005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.529 qpair failed and we were unable to recover it. 
00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.529 11:44:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1181568 00:38:44.529 [2024-07-12 11:44:30.767705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.529 [2024-07-12 11:44:30.767791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.529 [2024-07-12 11:44:30.767813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.530 [2024-07-12 11:44:30.767825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.530 [2024-07-12 11:44:30.767833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.530 [2024-07-12 11:44:30.767855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.530 qpair failed and we were unable to recover it. 
00:38:44.530 [2024-07-12 11:44:30.777684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.530 [2024-07-12 11:44:30.777764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.530 [2024-07-12 11:44:30.777787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.530 [2024-07-12 11:44:30.777799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.530 [2024-07-12 11:44:30.777809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.530 [2024-07-12 11:44:30.777832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.530 qpair failed and we were unable to recover it. 
00:38:44.791 [2024-07-12 11:44:30.787714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.791 [2024-07-12 11:44:30.787805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.791 [2024-07-12 11:44:30.787828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.791 [2024-07-12 11:44:30.787841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.791 [2024-07-12 11:44:30.787851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.791 [2024-07-12 11:44:30.787872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.791 qpair failed and we were unable to recover it. 
00:38:44.791 [2024-07-12 11:44:30.797680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.791 [2024-07-12 11:44:30.797767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.791 [2024-07-12 11:44:30.797790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.791 [2024-07-12 11:44:30.797801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.791 [2024-07-12 11:44:30.797810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.791 [2024-07-12 11:44:30.797831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.791 qpair failed and we were unable to recover it. 
00:38:44.791 [2024-07-12 11:44:30.807842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.791 [2024-07-12 11:44:30.807960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.791 [2024-07-12 11:44:30.807983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.791 [2024-07-12 11:44:30.807994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.791 [2024-07-12 11:44:30.808003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.791 [2024-07-12 11:44:30.808024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.792 qpair failed and we were unable to recover it. 
00:38:44.792 [2024-07-12 11:44:30.817854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:44.792 [2024-07-12 11:44:30.817930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:44.792 [2024-07-12 11:44:30.817952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:44.792 [2024-07-12 11:44:30.817964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:44.792 [2024-07-12 11:44:30.817974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:44.792 [2024-07-12 11:44:30.817996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:44.792 qpair failed and we were unable to recover it. 
00:38:44.792 [2024-07-12 11:44:30.827790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.827879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.827901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.827913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.827922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.827945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.837821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.837896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.837918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.837930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.837939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.837960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.847953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.848056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.848078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.848089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.848098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.848120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.857986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.858082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.858104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.858115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.858124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.858146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.867857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.867947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.867969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.867980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.867989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.868010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.877925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.878002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.878027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.878038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.878048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.878069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.888000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.888080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.888102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.888114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.888123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.888145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.898064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.898137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.898159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.898170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.898179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.898200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.907963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.908039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.908061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.908072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.908081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.908103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.918038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.918135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.918156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.918167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.918177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.918201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.928131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.928209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.928231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.928242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.928251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.928274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.938153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.938232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.938253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.938265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.938274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.938295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.948167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.948242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.792 [2024-07-12 11:44:30.948264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.792 [2024-07-12 11:44:30.948275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.792 [2024-07-12 11:44:30.948284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.792 [2024-07-12 11:44:30.948305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.792 qpair failed and we were unable to recover it.
00:38:44.792 [2024-07-12 11:44:30.958120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.792 [2024-07-12 11:44:30.958194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:30.958216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:30.958227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:30.958236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:30.958257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:30.968231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:30.968307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:30.968331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:30.968347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:30.968356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:30.968383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:30.978183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:30.978263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:30.978285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:30.978296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:30.978305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:30.978327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:30.988345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:30.988426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:30.988449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:30.988460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:30.988469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:30.988491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:30.998313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:30.998398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:30.998420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:30.998431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:30.998440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:30.998462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.008360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.008442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.008464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.008475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.008487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.008509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.018424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.018507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.018529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.018540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.018549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.018570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.028459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.028535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.028558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.028569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.028578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.028600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.038349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.038429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.038451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.038463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.038473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.038496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.048524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.048609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.048633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.048645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.048655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.048677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.058513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.058597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.058618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.058629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.058639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.058661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.068482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.068561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.068583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.068594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.068604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.068624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.078534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.078615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.078637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.078648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.078658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.078680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.088653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.088728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.088750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.088761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.793 [2024-07-12 11:44:31.088771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.793 [2024-07-12 11:44:31.088792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.793 qpair failed and we were unable to recover it.
00:38:44.793 [2024-07-12 11:44:31.098649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.793 [2024-07-12 11:44:31.098725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.793 [2024-07-12 11:44:31.098747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.793 [2024-07-12 11:44:31.098757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.794 [2024-07-12 11:44:31.098770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.794 [2024-07-12 11:44:31.098792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.794 qpair failed and we were unable to recover it.
00:38:44.794 [2024-07-12 11:44:31.108622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.794 [2024-07-12 11:44:31.108698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.794 [2024-07-12 11:44:31.108720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.794 [2024-07-12 11:44:31.108731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.794 [2024-07-12 11:44:31.108741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.794 [2024-07-12 11:44:31.108762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.794 qpair failed and we were unable to recover it.
00:38:44.794 [2024-07-12 11:44:31.118608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.794 [2024-07-12 11:44:31.118682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.794 [2024-07-12 11:44:31.118704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.794 [2024-07-12 11:44:31.118715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.794 [2024-07-12 11:44:31.118724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.794 [2024-07-12 11:44:31.118745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.794 qpair failed and we were unable to recover it.
00:38:44.794 [2024-07-12 11:44:31.128678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.794 [2024-07-12 11:44:31.128756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.794 [2024-07-12 11:44:31.128778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.794 [2024-07-12 11:44:31.128789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.794 [2024-07-12 11:44:31.128798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.794 [2024-07-12 11:44:31.128819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.794 qpair failed and we were unable to recover it.
00:38:44.794 [2024-07-12 11:44:31.138698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:44.794 [2024-07-12 11:44:31.138772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:44.794 [2024-07-12 11:44:31.138794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:44.794 [2024-07-12 11:44:31.138804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:44.794 [2024-07-12 11:44:31.138814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:44.794 [2024-07-12 11:44:31.138835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:44.794 qpair failed and we were unable to recover it.
00:38:45.055 [2024-07-12 11:44:31.148750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.055 [2024-07-12 11:44:31.148847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.055 [2024-07-12 11:44:31.148869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.055 [2024-07-12 11:44:31.148879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.055 [2024-07-12 11:44:31.148889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.055 [2024-07-12 11:44:31.148910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.055 qpair failed and we were unable to recover it.
00:38:45.055 [2024-07-12 11:44:31.158760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.055 [2024-07-12 11:44:31.158849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.055 [2024-07-12 11:44:31.158871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.055 [2024-07-12 11:44:31.158882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.055 [2024-07-12 11:44:31.158892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.055 [2024-07-12 11:44:31.158917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.055 qpair failed and we were unable to recover it.
00:38:45.055 [2024-07-12 11:44:31.168796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.055 [2024-07-12 11:44:31.168866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.055 [2024-07-12 11:44:31.168888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.055 [2024-07-12 11:44:31.168899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.055 [2024-07-12 11:44:31.168908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.055 [2024-07-12 11:44:31.168930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.055 qpair failed and we were unable to recover it.
00:38:45.055 [2024-07-12 11:44:31.178869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.055 [2024-07-12 11:44:31.178942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.055 [2024-07-12 11:44:31.178964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.055 [2024-07-12 11:44:31.178975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.055 [2024-07-12 11:44:31.178984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.055 [2024-07-12 11:44:31.179007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.055 qpair failed and we were unable to recover it.
00:38:45.055 [2024-07-12 11:44:31.188855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.055 [2024-07-12 11:44:31.188934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.055 [2024-07-12 11:44:31.188955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.055 [2024-07-12 11:44:31.188970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.055 [2024-07-12 11:44:31.188979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.055 [2024-07-12 11:44:31.189001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.055 qpair failed and we were unable to recover it. 
00:38:45.055 [2024-07-12 11:44:31.198875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.055 [2024-07-12 11:44:31.198954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.055 [2024-07-12 11:44:31.198975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.055 [2024-07-12 11:44:31.198986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.055 [2024-07-12 11:44:31.198996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.199018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.208934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.209015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.209037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.209048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.209058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.209080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.218928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.219005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.219027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.219038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.219047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.219068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.228996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.229076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.229097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.229109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.229118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.229140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.238985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.239061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.239083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.239093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.239103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.239124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.249070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.249147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.249169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.249180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.249189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.249211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.259133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.259221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.259243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.259254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.259265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.259285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.269077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.269153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.269175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.269185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.269194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.269215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.279112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.279191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.279216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.279229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.279238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.279261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.289184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.289267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.289289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.289300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.289310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.289330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.299252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.299323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.299345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.299356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.299365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.299392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.309237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.309326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.309349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.309360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.309370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.309398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.319257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.319360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.319390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.319401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.319411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.319435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.329268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.329345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.329367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.329382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.329392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.329414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.339262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.056 [2024-07-12 11:44:31.339341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.056 [2024-07-12 11:44:31.339362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.056 [2024-07-12 11:44:31.339373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.056 [2024-07-12 11:44:31.339391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.056 [2024-07-12 11:44:31.339412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.056 qpair failed and we were unable to recover it. 
00:38:45.056 [2024-07-12 11:44:31.349312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.349391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.349412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.349423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.349431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.349452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.359314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.359394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.359415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.359426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.359434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.359455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.369440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.369519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.369543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.369554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.369562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.369584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.379473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.379563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.379584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.379594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.379602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.379623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.389467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.389546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.389567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.389577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.389585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.389609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.399440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.399516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.399537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.399547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.399556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.399576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.057 [2024-07-12 11:44:31.409576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.057 [2024-07-12 11:44:31.409655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.057 [2024-07-12 11:44:31.409676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.057 [2024-07-12 11:44:31.409687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.057 [2024-07-12 11:44:31.409695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.057 [2024-07-12 11:44:31.409719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.057 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.419542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.419615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.419636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.419647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.419656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.419676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.429513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.429626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.429648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.429659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.429667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.429688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.439579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.439681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.439702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.439713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.439721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.439743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.449653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.449734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.449754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.449765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.449773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.449794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.459645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.459726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.459747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.459757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.459766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.459787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.469686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.469784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.469805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.469816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.469825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.469845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.479656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.479760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.479781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.479792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.479806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.479827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.489758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.489829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.489850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.489861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.489869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.489890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.499738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.499814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.499835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.318 [2024-07-12 11:44:31.499845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.318 [2024-07-12 11:44:31.499857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.318 [2024-07-12 11:44:31.499877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.318 qpair failed and we were unable to recover it. 
00:38:45.318 [2024-07-12 11:44:31.509803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.318 [2024-07-12 11:44:31.509887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.318 [2024-07-12 11:44:31.509908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.509918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.509926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.509947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.519768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.519843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.519864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.519874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.519882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.519902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.529917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.529992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.530013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.530023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.530032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.530053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.539905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.539982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.540003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.540013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.540022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.540042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.549909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.549985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.550005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.550016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.550024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.550044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.559940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.560026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.560047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.560057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.560066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.560087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.569942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.570022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.570042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.570052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.570060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.570081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.579893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.579972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.579992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.580003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.580011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.580032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.590046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.590121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.590141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.590155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.590164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.590184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.600098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.600174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.600195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.600205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.600213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.600233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.610064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.610134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.610154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.610164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.610172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.610193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.620096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.620171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.620191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.620202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.620210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.620247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.630294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.630370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.630396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.630407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.630415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.630435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.640157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.640232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.640253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.640264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.319 [2024-07-12 11:44:31.640272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.319 [2024-07-12 11:44:31.640292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.319 qpair failed and we were unable to recover it. 
00:38:45.319 [2024-07-12 11:44:31.650223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.319 [2024-07-12 11:44:31.650296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.319 [2024-07-12 11:44:31.650317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.319 [2024-07-12 11:44:31.650328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.320 [2024-07-12 11:44:31.650337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.320 [2024-07-12 11:44:31.650358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.320 qpair failed and we were unable to recover it. 
00:38:45.320 [2024-07-12 11:44:31.660226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.320 [2024-07-12 11:44:31.660298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.320 [2024-07-12 11:44:31.660319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.320 [2024-07-12 11:44:31.660330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.320 [2024-07-12 11:44:31.660339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.320 [2024-07-12 11:44:31.660359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.320 qpair failed and we were unable to recover it. 
00:38:45.320 [2024-07-12 11:44:31.670255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.320 [2024-07-12 11:44:31.670326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.320 [2024-07-12 11:44:31.670347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.320 [2024-07-12 11:44:31.670358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.320 [2024-07-12 11:44:31.670366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.320 [2024-07-12 11:44:31.670391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.320 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.680230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.680313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.680333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.680347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.680355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.680375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.690460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.690535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.690556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.690567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.690576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.690596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.700264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.700340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.700360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.700371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.700466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.700488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.710364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.710446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.710467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.710477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.710485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.710506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.720332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.720413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.720436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.720447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.720456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.720477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.730442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.730526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.730546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.730557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.730566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.730587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.740375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.740448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.740473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.740484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.740492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.740513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.750488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.580 [2024-07-12 11:44:31.750575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.580 [2024-07-12 11:44:31.750595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.580 [2024-07-12 11:44:31.750605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.580 [2024-07-12 11:44:31.750613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.580 [2024-07-12 11:44:31.750634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.580 qpair failed and we were unable to recover it. 
00:38:45.580 [2024-07-12 11:44:31.760487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.581 [2024-07-12 11:44:31.760568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.581 [2024-07-12 11:44:31.760589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.581 [2024-07-12 11:44:31.760599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.581 [2024-07-12 11:44:31.760607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.581 [2024-07-12 11:44:31.760627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.581 qpair failed and we were unable to recover it. 
00:38:45.581 [2024-07-12 11:44:31.770598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.581 [2024-07-12 11:44:31.770675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.581 [2024-07-12 11:44:31.770698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.581 [2024-07-12 11:44:31.770709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.581 [2024-07-12 11:44:31.770717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.581 [2024-07-12 11:44:31.770738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.581 qpair failed and we were unable to recover it. 
00:38:45.581 [2024-07-12 11:44:31.780501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.581 [2024-07-12 11:44:31.780575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.581 [2024-07-12 11:44:31.780595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.581 [2024-07-12 11:44:31.780606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.581 [2024-07-12 11:44:31.780614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.581 [2024-07-12 11:44:31.780634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.581 qpair failed and we were unable to recover it. 
00:38:45.581 [2024-07-12 11:44:31.790646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.790720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.790741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.790751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.790759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.790781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.800617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.800690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.800710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.800721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.800729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.800750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.810714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.810788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.810809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.810819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.810827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.810850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.820692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.820962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.820984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.820994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.821004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.821024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.830734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.830843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.830865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.830876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.830884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.830905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.840727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.840802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.840822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.840833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.840842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.840862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.850902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.850976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.850997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.851008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.851015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.851040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.860772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.860843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.860866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.860877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.860885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.860906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.870834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.870911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.870931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.870942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.870950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.870970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.880872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.880983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.881005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.881015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.881024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.881044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.890896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.890976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.581 [2024-07-12 11:44:31.890997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.581 [2024-07-12 11:44:31.891007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.581 [2024-07-12 11:44:31.891015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.581 [2024-07-12 11:44:31.891036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.581 qpair failed and we were unable to recover it.
00:38:45.581 [2024-07-12 11:44:31.900940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.581 [2024-07-12 11:44:31.901017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.582 [2024-07-12 11:44:31.901037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.582 [2024-07-12 11:44:31.901048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.582 [2024-07-12 11:44:31.901059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.582 [2024-07-12 11:44:31.901080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.582 qpair failed and we were unable to recover it.
00:38:45.582 [2024-07-12 11:44:31.910949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.582 [2024-07-12 11:44:31.911023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.582 [2024-07-12 11:44:31.911044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.582 [2024-07-12 11:44:31.911054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.582 [2024-07-12 11:44:31.911062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.582 [2024-07-12 11:44:31.911083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.582 qpair failed and we were unable to recover it.
00:38:45.582 [2024-07-12 11:44:31.920985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.582 [2024-07-12 11:44:31.921057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.582 [2024-07-12 11:44:31.921077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.582 [2024-07-12 11:44:31.921087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.582 [2024-07-12 11:44:31.921096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.582 [2024-07-12 11:44:31.921116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.582 qpair failed and we were unable to recover it.
00:38:45.582 [2024-07-12 11:44:31.931055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.582 [2024-07-12 11:44:31.931145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.582 [2024-07-12 11:44:31.931166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.582 [2024-07-12 11:44:31.931176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.582 [2024-07-12 11:44:31.931185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.582 [2024-07-12 11:44:31.931206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.582 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.941014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.941122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.941143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.941153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.941162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.941183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.951052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.951130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.951150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.951160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.951169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.951189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.961064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.961165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.961185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.961196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.961204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.961224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.971111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.971193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.971213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.971224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.971232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.971253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.981162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.981235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.981256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.981266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.981274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.981295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:31.991156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:31.991246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:31.991267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:31.991281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:31.991289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:31.991317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:32.001150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:32.001246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:32.001267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:32.001277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:32.001286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:32.001306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.843 qpair failed and we were unable to recover it.
00:38:45.843 [2024-07-12 11:44:32.011191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.843 [2024-07-12 11:44:32.011263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.843 [2024-07-12 11:44:32.011284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.843 [2024-07-12 11:44:32.011294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.843 [2024-07-12 11:44:32.011302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.843 [2024-07-12 11:44:32.011323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.021203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.021285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.021306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.021317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.021325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.021345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.031272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.031371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.031399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.031409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.031418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.031438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.041341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.041567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.041591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.041602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.041610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.041633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.051334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.051411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.051433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.051444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.051453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.051475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.061411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.061487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.061509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.061519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.061528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.061549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.071416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.071527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.071550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.071561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.071569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.071591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.081488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.081609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.081631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.081645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.081654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.081678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.091483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.091559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.091581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.091591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.091600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.091621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.101371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.101454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.101475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.101486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.101495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.101516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.111541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.111619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.111640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.111651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.111659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.111680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.121372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.121452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.121473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.121484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.121493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.121514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.131644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.131722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.131743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.131754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.131762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.131782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.141480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:45.844 [2024-07-12 11:44:32.141555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:45.844 [2024-07-12 11:44:32.141575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:45.844 [2024-07-12 11:44:32.141586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:45.844 [2024-07-12 11:44:32.141594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:45.844 [2024-07-12 11:44:32.141614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:45.844 qpair failed and we were unable to recover it.
00:38:45.844 [2024-07-12 11:44:32.151652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.844 [2024-07-12 11:44:32.151752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.844 [2024-07-12 11:44:32.151773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.844 [2024-07-12 11:44:32.151783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.844 [2024-07-12 11:44:32.151792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.844 [2024-07-12 11:44:32.151812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.844 qpair failed and we were unable to recover it. 
00:38:45.844 [2024-07-12 11:44:32.161583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.844 [2024-07-12 11:44:32.161660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.844 [2024-07-12 11:44:32.161681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.844 [2024-07-12 11:44:32.161691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.844 [2024-07-12 11:44:32.161699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.844 [2024-07-12 11:44:32.161719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.844 qpair failed and we were unable to recover it. 
00:38:45.844 [2024-07-12 11:44:32.171758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.844 [2024-07-12 11:44:32.171834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.844 [2024-07-12 11:44:32.171858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.845 [2024-07-12 11:44:32.171868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.845 [2024-07-12 11:44:32.171876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.845 [2024-07-12 11:44:32.171897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.845 qpair failed and we were unable to recover it. 
00:38:45.845 [2024-07-12 11:44:32.181626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.845 [2024-07-12 11:44:32.181731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.845 [2024-07-12 11:44:32.181752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.845 [2024-07-12 11:44:32.181763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.845 [2024-07-12 11:44:32.181771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.845 [2024-07-12 11:44:32.181791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.845 qpair failed and we were unable to recover it. 
00:38:45.845 [2024-07-12 11:44:32.191784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:45.845 [2024-07-12 11:44:32.191861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:45.845 [2024-07-12 11:44:32.191882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:45.845 [2024-07-12 11:44:32.191893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:45.845 [2024-07-12 11:44:32.191901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:45.845 [2024-07-12 11:44:32.191921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:45.845 qpair failed and we were unable to recover it. 
00:38:46.105 [2024-07-12 11:44:32.201669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.105 [2024-07-12 11:44:32.201762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.201782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.201793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.201802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.201823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.211813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.211890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.211910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.211921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.211929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.211953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.221712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.221789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.221809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.221820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.221828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.221849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.231879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.231951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.231971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.231981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.231990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.232010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.241857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.241930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.241951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.241962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.241970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.241991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.251951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.252022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.252043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.252059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.252068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.252088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.261803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.261874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.261897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.261908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.261917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.261937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.271977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.272048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.272069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.272079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.272088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.272109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.281969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.282051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.282071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.282081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.282090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.282110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.292020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.292093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.292114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.292125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.292133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.292154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.302024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.302129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.302154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.302165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.302176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.302197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.312092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.312178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.312199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.312210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.312219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.312243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.322135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.322212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.322233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.322244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.322253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.322273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.332121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.332198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.332219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.332229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.332238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.106 [2024-07-12 11:44:32.332259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.106 qpair failed and we were unable to recover it. 
00:38:46.106 [2024-07-12 11:44:32.342051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.106 [2024-07-12 11:44:32.342124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.106 [2024-07-12 11:44:32.342144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.106 [2024-07-12 11:44:32.342156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.106 [2024-07-12 11:44:32.342164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.342185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.352267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.352345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.352367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.352382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.352392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.352414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.362211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.362285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.362305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.362316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.362325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.362346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.372225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.372295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.372316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.372327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.372335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.372356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.382287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.382361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.382389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.382400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.382408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.382429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.392369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.392446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.392467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.392478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.392489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.392510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.402368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.402447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.402468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.402478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.402487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.402508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.412424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.412501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.412522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.412533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.412541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.412561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.422406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.422476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.422496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.422508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.422516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.422536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.432410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.432489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.432509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.432520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.432528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.432548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.442478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.442554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.442575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.442586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.442595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.442616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.107 [2024-07-12 11:44:32.452616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.107 [2024-07-12 11:44:32.452686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.107 [2024-07-12 11:44:32.452707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.107 [2024-07-12 11:44:32.452717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.107 [2024-07-12 11:44:32.452725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.107 [2024-07-12 11:44:32.452745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.107 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.462659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.462734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.462755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.462767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.462776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.462796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.472583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.472660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.472680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.472691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.472699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.472719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.482525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.482610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.482631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.482644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.482652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.482673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.492657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.492730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.492750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.492761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.492770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.492789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.502637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.502708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.502729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.502740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.502749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.502769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.512629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.512706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.512726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.512737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.512745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.512766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.522618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.522698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.522718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.522729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.522738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.522758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.368 [2024-07-12 11:44:32.532812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.368 [2024-07-12 11:44:32.532920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.368 [2024-07-12 11:44:32.532946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.368 [2024-07-12 11:44:32.532957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.368 [2024-07-12 11:44:32.532966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.368 [2024-07-12 11:44:32.532987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.368 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.542785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.542862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.542883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.542894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.542903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.542927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.552831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.552908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.552929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.552940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.552948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.552968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.562712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.562787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.562807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.562818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.562826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.562847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.572807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.572884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.572907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.572917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.572926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.572946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.582902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.582988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.583009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.583020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.583028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.583048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.592951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.593024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.593045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.593056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.593064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.593084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.602891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.602969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.602990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.603001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.603009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.603030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.612989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.613070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.613091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.613102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.613110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.613133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.622949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.623022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.623042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.623053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.623061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.623081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.633032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.633130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.633150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.633161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.633169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.633189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.643055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.643127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.643147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.643158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.643166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.643187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.653094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.653165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.653185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.653196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.653204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.653225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.663113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.663201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.663227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.663238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.663246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.663268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.673164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.673242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.673262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.369 [2024-07-12 11:44:32.673273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.369 [2024-07-12 11:44:32.673281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.369 [2024-07-12 11:44:32.673301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.369 qpair failed and we were unable to recover it. 
00:38:46.369 [2024-07-12 11:44:32.683129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.369 [2024-07-12 11:44:32.683208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.369 [2024-07-12 11:44:32.683229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.370 [2024-07-12 11:44:32.683239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.370 [2024-07-12 11:44:32.683247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.370 [2024-07-12 11:44:32.683268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.370 qpair failed and we were unable to recover it. 
00:38:46.370 [2024-07-12 11:44:32.693296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.370 [2024-07-12 11:44:32.693401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.370 [2024-07-12 11:44:32.693422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.370 [2024-07-12 11:44:32.693433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.370 [2024-07-12 11:44:32.693441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.370 [2024-07-12 11:44:32.693462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.370 qpair failed and we were unable to recover it. 
00:38:46.370 [2024-07-12 11:44:32.703286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.370 [2024-07-12 11:44:32.703363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.370 [2024-07-12 11:44:32.703391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.370 [2024-07-12 11:44:32.703402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.370 [2024-07-12 11:44:32.703413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.370 [2024-07-12 11:44:32.703434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.370 qpair failed and we were unable to recover it. 
00:38:46.370 [2024-07-12 11:44:32.713225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.370 [2024-07-12 11:44:32.713302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.370 [2024-07-12 11:44:32.713323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.370 [2024-07-12 11:44:32.713334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.370 [2024-07-12 11:44:32.713342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.370 [2024-07-12 11:44:32.713363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.370 qpair failed and we were unable to recover it. 
00:38:46.370 [2024-07-12 11:44:32.723216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.370 [2024-07-12 11:44:32.723291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.370 [2024-07-12 11:44:32.723313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.370 [2024-07-12 11:44:32.723324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.370 [2024-07-12 11:44:32.723332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.370 [2024-07-12 11:44:32.723352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.370 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.733354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.733431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.733452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.733464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.733472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.733493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.743341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.743421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.743443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.743453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.743462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.743482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.753411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.753491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.753512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.753522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.753530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.753551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.763427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.763503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.763524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.763534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.763548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.763569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.773447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.773531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.773552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.773563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.773571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.773595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.783426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.783501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.783523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.783534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.783542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.783563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.630 qpair failed and we were unable to recover it. 
00:38:46.630 [2024-07-12 11:44:32.793479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.630 [2024-07-12 11:44:32.793601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.630 [2024-07-12 11:44:32.793623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.630 [2024-07-12 11:44:32.793634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.630 [2024-07-12 11:44:32.793645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.630 [2024-07-12 11:44:32.793666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.803503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.803575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.803596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.803606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.803614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.803635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.813649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.813757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.813779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.813790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.813798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.813818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.823570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.823640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.823661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.823671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.823679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.823699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.833652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.833762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.833784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.833795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.833803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.833823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.843582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.843708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.843728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.843739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.843747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.843767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.853717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.853798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.853819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.853829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.853837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.853857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.863653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.863746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.863767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.863777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.863786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.863807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.873722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.873823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.873843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.873853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.873862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.873882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.883862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.883939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.883960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.883973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.883982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.884002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.893847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.893917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.893938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.893949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.893957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.893977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.903820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.903892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.903912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.903923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.903931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.903951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.913901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.914029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.914051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.914063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.914071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.914092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.923923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.923999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.924019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.924029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.924037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.924058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.933905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.933978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.933999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.934009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.934018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.934039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.943876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.943960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.943981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.943991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.943999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.944020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.953967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.954044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.954064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.954075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.954083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.954103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.964020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.964097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.964118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.964129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.964137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.964158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.631 [2024-07-12 11:44:32.974045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.631 [2024-07-12 11:44:32.974131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.631 [2024-07-12 11:44:32.974155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.631 [2024-07-12 11:44:32.974165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.631 [2024-07-12 11:44:32.974174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.631 [2024-07-12 11:44:32.974194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.631 qpair failed and we were unable to recover it. 
00:38:46.632 [2024-07-12 11:44:32.984037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.632 [2024-07-12 11:44:32.984156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.632 [2024-07-12 11:44:32.984178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.632 [2024-07-12 11:44:32.984189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.632 [2024-07-12 11:44:32.984197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.632 [2024-07-12 11:44:32.984218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.632 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:32.994056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:32.994183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:32.994205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:32.994216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:32.994225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:32.994246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.004150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.004235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.004255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.004266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.004275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.004298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.014203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.014277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.014299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.014309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.014318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.014342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.024095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.024183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.024208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.024220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.024228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.024249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.034194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.034304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.034329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.034339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.034348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.034370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.044291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.044393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.044415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.044426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.044434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.044454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.054291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.054363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.054394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.054405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.054414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.054435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.064352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.892 [2024-07-12 11:44:33.064576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.892 [2024-07-12 11:44:33.064602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.892 [2024-07-12 11:44:33.064613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.892 [2024-07-12 11:44:33.064621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.892 [2024-07-12 11:44:33.064644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.892 qpair failed and we were unable to recover it. 
00:38:46.892 [2024-07-12 11:44:33.074316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.074404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.074425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.074436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.074444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.074465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.084403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.084477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.084498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.084508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.084517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.084537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.094400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.094503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.094523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.094534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.094542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.094563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.104387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.104485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.104506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.104517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.104525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.104549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.114440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.114520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.114540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.114551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.114559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.114580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.124496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.124576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.124597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.124607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.124615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.124636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.134472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.134591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.134614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.134625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.134633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.134654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.144486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.144566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.144587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.144597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.144605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.144626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.154593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.154676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.154696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.154707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.154715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.154736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.164571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.164656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.164677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.164687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.164696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.164716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.174579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.893 [2024-07-12 11:44:33.174655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.893 [2024-07-12 11:44:33.174676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.893 [2024-07-12 11:44:33.174686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.893 [2024-07-12 11:44:33.174694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.893 [2024-07-12 11:44:33.174715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.893 qpair failed and we were unable to recover it. 
00:38:46.893 [2024-07-12 11:44:33.184651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.184722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.184742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.184753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.184761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.184781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.194618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.194693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.194714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.194725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.194737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.194757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.204659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.204782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.204803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.204815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.204823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.204844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.214697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.214775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.214795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.214806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.214814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.214834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.224784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.224890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.224914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.224925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.224933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.224954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.234768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.234857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.234877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.234888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.234896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.234920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:46.894 [2024-07-12 11:44:33.244769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:46.894 [2024-07-12 11:44:33.244845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:46.894 [2024-07-12 11:44:33.244866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:46.894 [2024-07-12 11:44:33.244877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:46.894 [2024-07-12 11:44:33.244885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:46.894 [2024-07-12 11:44:33.244905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:46.894 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.254773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.254877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.254898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.254909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.254918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.254938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.264949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.265019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.265040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.265050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.265059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.265079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.274961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.275079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.275101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.275113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.275121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.275147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.285062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.285161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.285181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.285195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.285203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.285224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.294929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.295004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.295024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.295035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.295043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.295064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.304931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.305003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.305024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.305035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.305043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.305062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.315172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.315277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.315297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.315309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.315317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.315338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.325069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.154 [2024-07-12 11:44:33.325151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.154 [2024-07-12 11:44:33.325171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.154 [2024-07-12 11:44:33.325182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.154 [2024-07-12 11:44:33.325190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.154 [2024-07-12 11:44:33.325211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.154 qpair failed and we were unable to recover it. 
00:38:47.154 [2024-07-12 11:44:33.335010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.335084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.335105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.335117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.335125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.335145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.345116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.345190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.345211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.345222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.345230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.345251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.355167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.355285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.355307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.355318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.355326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.355347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.365089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.365165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.365186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.365197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.365205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.365225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.375351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.375463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.375484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.375498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.375507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.375527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.385223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.385300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.385320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.385331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.385339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.385359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.395241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.395317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.395338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.395348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.395357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.395385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.405283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.405361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.405388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.405399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.405408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.405429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.415373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.415449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.415470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.415481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.415489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.415511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.425322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.425410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.425431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.425442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.425451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.425471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.435392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.435503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.435529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.435540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.435549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.435570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.445413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.445491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.445512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.445523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.445531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.445552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.455466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.455543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.455564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.455575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.455583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.455603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.465501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.465596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.155 [2024-07-12 11:44:33.465620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.155 [2024-07-12 11:44:33.465631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.155 [2024-07-12 11:44:33.465639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.155 [2024-07-12 11:44:33.465675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.155 qpair failed and we were unable to recover it. 
00:38:47.155 [2024-07-12 11:44:33.475418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.155 [2024-07-12 11:44:33.475495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.156 [2024-07-12 11:44:33.475517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.156 [2024-07-12 11:44:33.475528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.156 [2024-07-12 11:44:33.475536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.156 [2024-07-12 11:44:33.475557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.156 qpair failed and we were unable to recover it. 
00:38:47.156 [2024-07-12 11:44:33.485470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.156 [2024-07-12 11:44:33.485556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.156 [2024-07-12 11:44:33.485577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.156 [2024-07-12 11:44:33.485587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.156 [2024-07-12 11:44:33.485596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.156 [2024-07-12 11:44:33.485617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.156 qpair failed and we were unable to recover it. 
00:38:47.156 [2024-07-12 11:44:33.495585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.156 [2024-07-12 11:44:33.495659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.156 [2024-07-12 11:44:33.495680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.156 [2024-07-12 11:44:33.495690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.156 [2024-07-12 11:44:33.495699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.156 [2024-07-12 11:44:33.495719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.156 qpair failed and we were unable to recover it. 
00:38:47.156 [2024-07-12 11:44:33.505478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.156 [2024-07-12 11:44:33.505552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.156 [2024-07-12 11:44:33.505573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.156 [2024-07-12 11:44:33.505584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.156 [2024-07-12 11:44:33.505592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.156 [2024-07-12 11:44:33.505616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.156 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.515661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.515776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.515799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.515811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.515819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.515839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.525551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.525622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.525643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.525653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.525663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.525683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.535699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.535788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.535809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.535826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.535834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.535855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.545914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.545985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.546007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.546017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.546025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.546046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.555678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.555767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.555790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.555801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.555809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.555829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.565663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.565734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.565755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.565766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.565774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.565795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.575796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.575870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.575891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.575901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.575909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.575930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.585823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.585894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.585915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.585926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.585934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.585955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.595879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.595967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.595987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.595998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.596009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.596029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.605831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.605922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.605942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.605953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.605961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.605982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.615904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.615981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.616002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.616013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.616022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.616042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.625949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.626052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.626072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.626084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.626092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.626112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.635959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.636043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.416 [2024-07-12 11:44:33.636064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.416 [2024-07-12 11:44:33.636074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.416 [2024-07-12 11:44:33.636082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.416 [2024-07-12 11:44:33.636103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.416 qpair failed and we were unable to recover it. 
00:38:47.416 [2024-07-12 11:44:33.645945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.416 [2024-07-12 11:44:33.646024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.646045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.646056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.646064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.646085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.656055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.656124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.656145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.656156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.656164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.656185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.666030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.666096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.666117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.666127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.666135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.666157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.676019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.676095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.676116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.676127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.676135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.676155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.686054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.686128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.686148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.686162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.686170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.686191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.696150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.696227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.696248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.696259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.696267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.696291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.706111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.417 [2024-07-12 11:44:33.706185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.417 [2024-07-12 11:44:33.706206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.417 [2024-07-12 11:44:33.706216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.417 [2024-07-12 11:44:33.706225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.417 [2024-07-12 11:44:33.706246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.417 qpair failed and we were unable to recover it. 
00:38:47.417 [2024-07-12 11:44:33.716263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.716402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.716424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.716436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.716444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.716464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.417 [2024-07-12 11:44:33.726223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.726298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.726320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.726331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.726339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.726359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.417 [2024-07-12 11:44:33.736306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.736388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.736409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.736419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.736427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.736447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.417 [2024-07-12 11:44:33.746264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.746337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.746358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.746369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.746453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.746475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.417 [2024-07-12 11:44:33.756255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.756326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.756346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.756357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.756365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.756394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.417 [2024-07-12 11:44:33.766288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.417 [2024-07-12 11:44:33.766365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.417 [2024-07-12 11:44:33.766393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.417 [2024-07-12 11:44:33.766405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.417 [2024-07-12 11:44:33.766413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.417 [2024-07-12 11:44:33.766434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.417 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.776369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.776452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.776473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.776486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.776495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.776515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.786478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.786551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.786573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.786584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.786593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.786613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.796505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.796580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.796602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.796612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.796620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.796640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.806447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.806530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.806551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.806562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.806570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.806591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.816502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.816589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.816609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.816619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.816627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.816649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.826511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.826585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.826605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.826616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.826625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.826645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.836534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.836608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.836630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.836640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.836648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.836670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.846548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.846631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.846652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.846663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.846672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.846693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.856647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.856726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.856746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.856757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.856766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.856786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.866673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.866744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.866768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.866779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.866787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.866808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.876682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.876761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.678 [2024-07-12 11:44:33.876781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.678 [2024-07-12 11:44:33.876792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.678 [2024-07-12 11:44:33.876800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.678 [2024-07-12 11:44:33.876820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.678 qpair failed and we were unable to recover it.
00:38:47.678 [2024-07-12 11:44:33.886628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.678 [2024-07-12 11:44:33.886702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.886723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.886734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.886742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.886762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.896731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.896854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.896877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.896888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.896896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.896916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.906753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.906827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.906848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.906858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.906867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.906890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.916828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.916948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.916970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.916981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.916989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.917009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.926794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.926870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.926890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.926900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.926908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.926932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.936855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.936935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.936955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.936966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.936973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.936994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.946835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.946907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.946928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.946938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.946947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.946967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.956900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.956972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.956995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.957006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.957014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.957034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.966907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.966983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.967004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.967015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.967023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.967043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.976987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.977063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.977084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.977094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.977103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.977123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.987002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.987120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.987142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.987152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.987161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.987181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:33.997012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:33.997101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:33.997122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:33.997133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:33.997144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:33.997165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:34.007077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:34.007169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:34.007190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:34.007200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:34.007208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:34.007228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:34.017085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:34.017161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:34.017182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:34.017193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:34.017201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:34.017223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.679 [2024-07-12 11:44:34.027115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.679 [2024-07-12 11:44:34.027191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.679 [2024-07-12 11:44:34.027212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.679 [2024-07-12 11:44:34.027223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.679 [2024-07-12 11:44:34.027232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.679 [2024-07-12 11:44:34.027253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.679 qpair failed and we were unable to recover it.
00:38:47.939 [2024-07-12 11:44:34.037138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.939 [2024-07-12 11:44:34.037230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.939 [2024-07-12 11:44:34.037251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.939 [2024-07-12 11:44:34.037262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.939 [2024-07-12 11:44:34.037271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.939 [2024-07-12 11:44:34.037291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.939 qpair failed and we were unable to recover it.
00:38:47.939 [2024-07-12 11:44:34.047240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.939 [2024-07-12 11:44:34.047320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.939 [2024-07-12 11:44:34.047342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.939 [2024-07-12 11:44:34.047353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.939 [2024-07-12 11:44:34.047367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.939 [2024-07-12 11:44:34.047396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.939 qpair failed and we were unable to recover it.
00:38:47.939 [2024-07-12 11:44:34.057166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.939 [2024-07-12 11:44:34.057240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.939 [2024-07-12 11:44:34.057261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.939 [2024-07-12 11:44:34.057272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.939 [2024-07-12 11:44:34.057280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.939 [2024-07-12 11:44:34.057301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.939 qpair failed and we were unable to recover it.
00:38:47.939 [2024-07-12 11:44:34.067230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:47.939 [2024-07-12 11:44:34.067307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:47.939 [2024-07-12 11:44:34.067327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:47.939 [2024-07-12 11:44:34.067338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:47.939 [2024-07-12 11:44:34.067346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:47.939 [2024-07-12 11:44:34.067366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:47.939 qpair failed and we were unable to recover it.
00:38:47.939 [2024-07-12 11:44:34.077342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.077440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.077461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.077472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.077481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.077501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.087201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.087276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.087297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.087307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.087319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.087339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.097262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.097334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.097355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.097367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.097375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.097401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.107269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.107372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.107398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.107409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.107417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.107438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.117312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.117393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.117414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.117424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.117433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.117453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.127401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.127490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.127511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.127521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.127530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.127551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.137482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.137584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.137605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.137617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.137625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.137645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.939 [2024-07-12 11:44:34.147429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.939 [2024-07-12 11:44:34.147505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.939 [2024-07-12 11:44:34.147526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.939 [2024-07-12 11:44:34.147536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.939 [2024-07-12 11:44:34.147544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.939 [2024-07-12 11:44:34.147565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.939 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.157513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.157632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.157654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.157666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.157674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.157700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.167516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.167618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.167639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.167650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.167658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.167678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.177585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.177658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.177679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.177692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.177701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.177720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.187588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.187660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.187681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.187692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.187700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.187720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.197560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.197633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.197654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.197664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.197673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.197694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.207589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.207679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.207701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.207711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.207720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.207740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.217622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.217768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.217790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.217801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.217809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.217830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.227689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.227788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.227809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.227819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.227828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.227848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.237748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.237822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.237844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.237856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.237865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.237885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.247721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.247800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.247821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.247832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.247841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.247862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.257850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.257935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.257956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.257967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.257976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.257996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.267769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.267846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.267870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.267881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.267890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.267910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.277843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.277918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.277938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.277950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.277958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.277979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:47.940 [2024-07-12 11:44:34.287903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:47.940 [2024-07-12 11:44:34.287989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:47.940 [2024-07-12 11:44:34.288010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:47.940 [2024-07-12 11:44:34.288021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:47.940 [2024-07-12 11:44:34.288029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:47.940 [2024-07-12 11:44:34.288050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:47.940 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.297914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.298006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.298030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.298041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.298051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.298072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.308006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.308106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.308132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.308143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.308153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.308177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.318039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.318136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.318157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.318168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.318177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.318197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.328024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.328100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.328122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.328133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.328141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.328163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.338134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.338241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.338265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.338276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.338286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.338307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.348029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.348113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.348134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.348146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.348155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.348176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.358094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.358169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.358194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.358205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.358213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.358234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.368092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.368166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.368188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.368199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.368208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.368228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.378231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.378308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.378330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.378341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.378350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.378370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.388130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.388214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.388235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.388246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.388255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.388280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.398198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.398283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.398304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.398316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.398328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.398348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.408194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.201 [2024-07-12 11:44:34.408274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.201 [2024-07-12 11:44:34.408295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.201 [2024-07-12 11:44:34.408306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.201 [2024-07-12 11:44:34.408315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.201 [2024-07-12 11:44:34.408336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.201 qpair failed and we were unable to recover it. 
00:38:48.201 [2024-07-12 11:44:34.418308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.418390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.418412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.418423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.418432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.418454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.428322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.428401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.428423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.428434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.428443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.428464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.438332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.438427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.438449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.438460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.438469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.438490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.448351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.448438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.448459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.448477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.448486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.448508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.458396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.458482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.458503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.458514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.458523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.458545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.468357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.468448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.468469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.468481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.468490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.468511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.478427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.478503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.478524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.478536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.478545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.478565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.488545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.488623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.488644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.488656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.488667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.488688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.498539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.498616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.498638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.498650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.498659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.498680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.508508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.508580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.508602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.508613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.508622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.508643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.518581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.518664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.518686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.518697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.518706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.518726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.528612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.528685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.528707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.528719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.528728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.528749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.538616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.538690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.538712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.538724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.538733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.538753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.202 [2024-07-12 11:44:34.548645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.202 [2024-07-12 11:44:34.548728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.202 [2024-07-12 11:44:34.548750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.202 [2024-07-12 11:44:34.548761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.202 [2024-07-12 11:44:34.548770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.202 [2024-07-12 11:44:34.548791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.202 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.558649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.558725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.558747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.558759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.558768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.558794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.568646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.568721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.568743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.568754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.568763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.568783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.578691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.578762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.578784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.578798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.578807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.578827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.588715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.588801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.588822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.588833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.588841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.588863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.598861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.598935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.598956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.598967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.598976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.598997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.608752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.608827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.608849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.608861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.608870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.608890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.618829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.618897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.618918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.618929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.618938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.618962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.628843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.628923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.628945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.628956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.628965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.628986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.638932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.639040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.639060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.639072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.639081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.463 [2024-07-12 11:44:34.639101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.463 qpair failed and we were unable to recover it. 
00:38:48.463 [2024-07-12 11:44:34.648917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.463 [2024-07-12 11:44:34.649000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.463 [2024-07-12 11:44:34.649021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.463 [2024-07-12 11:44:34.649033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.463 [2024-07-12 11:44:34.649042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.649063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.658985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.659056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.659078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.659089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.659098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.659119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.668943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.669040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.669064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.669075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.669084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.669105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.679048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.679122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.679144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.679156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.679164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.679185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.689073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.689178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.689201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.689212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.689222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.689243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.699085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.699164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.699185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.699196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.699205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.699225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.709123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.709245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.709267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.709278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.709287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.709311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.719195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.719272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.719294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.719305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.719314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.719335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.729108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.729196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.729217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.729228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.729237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.729256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.739172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.739247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.739269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.739281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.739290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.739310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.749149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.749221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.749242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.749253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.749262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.749282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.759373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.759457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.759482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.759493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.759501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.759522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.769241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.769319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.769341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.769352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.769360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.769386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.779337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.779423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.779445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.779456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.779465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.779486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.789328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.789417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.464 [2024-07-12 11:44:34.789439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.464 [2024-07-12 11:44:34.789450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.464 [2024-07-12 11:44:34.789458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.464 [2024-07-12 11:44:34.789480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.464 qpair failed and we were unable to recover it. 
00:38:48.464 [2024-07-12 11:44:34.799370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.464 [2024-07-12 11:44:34.799469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.465 [2024-07-12 11:44:34.799490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.465 [2024-07-12 11:44:34.799502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.465 [2024-07-12 11:44:34.799510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.465 [2024-07-12 11:44:34.799534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.465 qpair failed and we were unable to recover it. 
00:38:48.465 [2024-07-12 11:44:34.809337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.465 [2024-07-12 11:44:34.809444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.465 [2024-07-12 11:44:34.809465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.465 [2024-07-12 11:44:34.809476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.465 [2024-07-12 11:44:34.809485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.465 [2024-07-12 11:44:34.809505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.465 qpair failed and we were unable to recover it. 
00:38:48.465 [2024-07-12 11:44:34.819388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.465 [2024-07-12 11:44:34.819470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.724 [2024-07-12 11:44:34.819491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.724 [2024-07-12 11:44:34.819509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.724 [2024-07-12 11:44:34.819519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.724 [2024-07-12 11:44:34.819541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.724 qpair failed and we were unable to recover it. 
00:38:48.724 [2024-07-12 11:44:34.829331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.724 [2024-07-12 11:44:34.829406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.724 [2024-07-12 11:44:34.829428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.724 [2024-07-12 11:44:34.829439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.724 [2024-07-12 11:44:34.829448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.724 [2024-07-12 11:44:34.829469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.724 qpair failed and we were unable to recover it. 
00:38:48.724 [2024-07-12 11:44:34.839535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.724 [2024-07-12 11:44:34.839611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.724 [2024-07-12 11:44:34.839633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.724 [2024-07-12 11:44:34.839644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.724 [2024-07-12 11:44:34.839653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.724 [2024-07-12 11:44:34.839673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.724 qpair failed and we were unable to recover it. 
00:38:48.724 [2024-07-12 11:44:34.849553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.724 [2024-07-12 11:44:34.849641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.724 [2024-07-12 11:44:34.849663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.724 [2024-07-12 11:44:34.849674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.724 [2024-07-12 11:44:34.849683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.849708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.859638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.859741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.859762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.859774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.859783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.859803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.869573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.869651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.869673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.869684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.869693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.869715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.879662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.879740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.879762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.879773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.879782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.879803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.889653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.889758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.889778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.889789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.889801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.889822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.899742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.899828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.899849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.899861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.899869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.899890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.909676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.909751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.909773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.909784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.909793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.909814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.919720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.919796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.919817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.919829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.919838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.919859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.929791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.929872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.929893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.929904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.929913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.929933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.939790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.939861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.939883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.939894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.939902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.939924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.949714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.949788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.949810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.949821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.949830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.949851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.959860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.959935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.959957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.959968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.959977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.959998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.969871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.969948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.969970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.969981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.725 [2024-07-12 11:44:34.969989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.725 [2024-07-12 11:44:34.970010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.725 qpair failed and we were unable to recover it. 
00:38:48.725 [2024-07-12 11:44:34.979898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.725 [2024-07-12 11:44:34.980018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.725 [2024-07-12 11:44:34.980039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.725 [2024-07-12 11:44:34.980054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:34.980063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:34.980083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:34.989922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:34.989996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:34.990018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:34.990029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:34.990038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:34.990058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:34.999847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:34.999945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:34.999966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:34.999978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:34.999986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.000008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.009968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.010038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.010059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.010071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.010079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.010101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.020014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.020090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.020110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.020122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.020131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.020152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.030099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.030172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.030193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.030205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.030214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.030235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.040136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.040213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.040235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.040247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.040256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.040276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.050208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.050289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.050312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.050324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.050333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.050354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.060281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.060394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.060416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.060428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.060437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.060460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.070137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.070227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.070249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.070264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.070272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.070293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.726 [2024-07-12 11:44:35.080168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.726 [2024-07-12 11:44:35.080243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.726 [2024-07-12 11:44:35.080265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.726 [2024-07-12 11:44:35.080276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.726 [2024-07-12 11:44:35.080285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.726 [2024-07-12 11:44:35.080309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.726 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.090249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.090338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.090361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.090373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.090387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.090409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.100347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.100430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.100452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.100464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.100473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.100494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.110259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.110339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.110362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.110373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.110388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.110409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.120360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.120460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.120482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.120494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.120503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.120523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.130273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.130394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.130416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.130428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.130437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.130458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.140503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.140583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.140605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.140617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.140626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.140648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.150443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.150521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.150543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.150555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.150564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.150584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.160405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.160484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.160509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.160521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.160529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.160550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.170351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.170432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.987 [2024-07-12 11:44:35.170454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.987 [2024-07-12 11:44:35.170466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.987 [2024-07-12 11:44:35.170475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.987 [2024-07-12 11:44:35.170496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.987 qpair failed and we were unable to recover it. 
00:38:48.987 [2024-07-12 11:44:35.180474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.987 [2024-07-12 11:44:35.180550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.180572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.180584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.180593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.180614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.190562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.190685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.190708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.190720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.190730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.190751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.200514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.200591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.200613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.200625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.200633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.200657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.210626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.210701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.210723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.210735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.210744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.210764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.220635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.220710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.220730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.220742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.220751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.220772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.230651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.230750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.230772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.230783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.230792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.230816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.240716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.240792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.240813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.240825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.240833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.240854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.250705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.250798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.250822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.250833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.250842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.250863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.260727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.260802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.260823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.260835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.260843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.260865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.270736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.270830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.270852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.270863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.270872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.270893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.280754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.280871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.280892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.280903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.280912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.280932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.290707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.290790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.290811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.290822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.290833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.290855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.300746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.300864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.300885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.300896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.300906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.300926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.310970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.311040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.311061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.311073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.988 [2024-07-12 11:44:35.311082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.988 [2024-07-12 11:44:35.311119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.988 qpair failed and we were unable to recover it. 
00:38:48.988 [2024-07-12 11:44:35.320894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.988 [2024-07-12 11:44:35.320981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.988 [2024-07-12 11:44:35.321002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.988 [2024-07-12 11:44:35.321013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.989 [2024-07-12 11:44:35.321021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.989 [2024-07-12 11:44:35.321042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.989 qpair failed and we were unable to recover it. 
00:38:48.989 [2024-07-12 11:44:35.330836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.989 [2024-07-12 11:44:35.330918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.989 [2024-07-12 11:44:35.330939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.989 [2024-07-12 11:44:35.330951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.989 [2024-07-12 11:44:35.330981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.989 [2024-07-12 11:44:35.331003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.989 qpair failed and we were unable to recover it. 
00:38:48.989 [2024-07-12 11:44:35.340975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:48.989 [2024-07-12 11:44:35.341076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:48.989 [2024-07-12 11:44:35.341098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:48.989 [2024-07-12 11:44:35.341109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:48.989 [2024-07-12 11:44:35.341118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:48.989 [2024-07-12 11:44:35.341139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:48.989 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.350987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.351090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.351112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.351124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.351133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.351154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.360994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.361069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.361089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.361101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.361110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.361131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.371046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.371153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.371174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.371186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.371195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.371216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.381175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.381277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.381298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.381312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.381322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.381343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.391111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.391201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.391222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.391233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.391242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.391263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.401167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.401248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.401269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.401280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.401289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.401310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.250 [2024-07-12 11:44:35.411207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.250 [2024-07-12 11:44:35.411285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.250 [2024-07-12 11:44:35.411306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.250 [2024-07-12 11:44:35.411318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.250 [2024-07-12 11:44:35.411326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.250 [2024-07-12 11:44:35.411347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.250 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.421228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.421312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.421333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.421344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.421353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.421373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.431250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.431363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.431389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.431401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.431410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.431431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.441230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.441336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.441358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.441369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.441383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.441405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.451272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.451386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.451408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.451420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.451429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.451450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.461345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.461442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.461463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.461474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.461483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.461505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.471328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.471406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.471427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.471440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.471449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.471469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.481414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.481486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.481508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.481520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.481528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.481550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.491402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.491498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.491519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.491530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.491539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.491560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.501466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.501583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.501607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.501618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.501628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.501649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.511454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.511527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.511549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.511560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.511569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.511590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.521453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.521531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.521553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.521565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.521573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.521594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.531527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.531603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.531624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.531636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.531644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.531664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.541482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.541557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.541578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.251 [2024-07-12 11:44:35.541590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.251 [2024-07-12 11:44:35.541599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.251 [2024-07-12 11:44:35.541622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.251 qpair failed and we were unable to recover it. 
00:38:49.251 [2024-07-12 11:44:35.551568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.251 [2024-07-12 11:44:35.551643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.251 [2024-07-12 11:44:35.551664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.551676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.551685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.551706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.252 [2024-07-12 11:44:35.561600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.252 [2024-07-12 11:44:35.561707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.252 [2024-07-12 11:44:35.561732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.561744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.561753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.561774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.252 [2024-07-12 11:44:35.571611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.252 [2024-07-12 11:44:35.571687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.252 [2024-07-12 11:44:35.571709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.571721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.571729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.571749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.252 [2024-07-12 11:44:35.581545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.252 [2024-07-12 11:44:35.581615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.252 [2024-07-12 11:44:35.581637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.581648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.581658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.581680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.252 [2024-07-12 11:44:35.591604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.252 [2024-07-12 11:44:35.591728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.252 [2024-07-12 11:44:35.591755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.591767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.591775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.591797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.252 [2024-07-12 11:44:35.601544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.252 [2024-07-12 11:44:35.601618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.252 [2024-07-12 11:44:35.601639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.252 [2024-07-12 11:44:35.601651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.252 [2024-07-12 11:44:35.601660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.252 [2024-07-12 11:44:35.601685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.252 qpair failed and we were unable to recover it. 
00:38:49.511 [2024-07-12 11:44:35.611733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.511 [2024-07-12 11:44:35.611866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.511 [2024-07-12 11:44:35.611888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.511 [2024-07-12 11:44:35.611900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.511 [2024-07-12 11:44:35.611908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.511 [2024-07-12 11:44:35.611930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.511 qpair failed and we were unable to recover it. 
00:38:49.511 [2024-07-12 11:44:35.621693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.511 [2024-07-12 11:44:35.621768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.511 [2024-07-12 11:44:35.621790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.511 [2024-07-12 11:44:35.621801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.511 [2024-07-12 11:44:35.621810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.511 [2024-07-12 11:44:35.621830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.511 qpair failed and we were unable to recover it. 
00:38:49.511 [2024-07-12 11:44:35.631743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.511 [2024-07-12 11:44:35.631823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.512 [2024-07-12 11:44:35.631844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.512 [2024-07-12 11:44:35.631856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.512 [2024-07-12 11:44:35.631864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780 00:38:49.512 [2024-07-12 11:44:35.631885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:49.512 qpair failed and we were unable to recover it. 
00:38:49.512 [2024-07-12 11:44:35.641736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.641811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.641833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.641844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.641853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.641875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.651856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.651979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.652003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.652014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.652023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.652044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.661802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.661888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.661910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.661922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.661930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.661950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.671853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.671948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.671969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.671980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.671989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.672009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.681852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.681966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.681987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.681998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.682007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.682027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.691988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.692061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.692082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.692093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.692105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.692126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.701876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.701954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.701975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.701986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.701995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.702015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.711947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.712033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.712056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.712067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.712076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.712097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.722061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.722136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.722158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.722169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.512 [2024-07-12 11:44:35.722177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.512 [2024-07-12 11:44:35.722198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.512 qpair failed and we were unable to recover it.
00:38:49.512 [2024-07-12 11:44:35.732077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.512 [2024-07-12 11:44:35.732161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.512 [2024-07-12 11:44:35.732185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.512 [2024-07-12 11:44:35.732197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.732207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.732229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.742026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.742111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.742132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.742144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.742152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.742173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.752074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.752183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.752204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.752215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.752224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.752245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.762149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.762230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.762251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.762263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.762271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.762292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.772213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.772304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.772326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.772337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.772346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.772370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.782164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.782240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.782261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.782272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.782284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.782305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.792225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.792298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.792320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.792332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.792341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.792362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.802231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.802309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.802331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.802342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.802352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.802373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.812368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.812454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.812476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.812488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.812497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.812518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.822309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.822383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.822405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.822416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.822424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.822446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.832315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.832397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.832419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.832430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.832438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.832459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.513 [2024-07-12 11:44:35.842328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.513 [2024-07-12 11:44:35.842414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.513 [2024-07-12 11:44:35.842435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.513 [2024-07-12 11:44:35.842447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.513 [2024-07-12 11:44:35.842455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.513 [2024-07-12 11:44:35.842482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.513 qpair failed and we were unable to recover it.
00:38:49.514 [2024-07-12 11:44:35.852482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.514 [2024-07-12 11:44:35.852563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.514 [2024-07-12 11:44:35.852584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.514 [2024-07-12 11:44:35.852596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.514 [2024-07-12 11:44:35.852605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.514 [2024-07-12 11:44:35.852626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.514 qpair failed and we were unable to recover it.
00:38:49.514 [2024-07-12 11:44:35.862501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.514 [2024-07-12 11:44:35.862579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.514 [2024-07-12 11:44:35.862600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.514 [2024-07-12 11:44:35.862612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.514 [2024-07-12 11:44:35.862620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.514 [2024-07-12 11:44:35.862640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.514 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.872494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.872577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.872598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.872612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.872621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.872641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.882472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.882557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.882578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.882588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.882597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.882618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.892628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.892738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.892759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.892769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.892778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.892799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.902610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.902684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.902705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.902716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.902725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.902746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.912596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.912676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.912697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.912708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.912717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.912738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.922640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.922715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.922737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.922748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.922756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.922777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.932696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.932778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.932800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.932810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.932819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.932839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.942729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.942801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.942822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.942834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.942843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.942864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.952720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.952805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.952827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.952838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.952846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.952867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.962684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.773 [2024-07-12 11:44:35.962765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.773 [2024-07-12 11:44:35.962789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.773 [2024-07-12 11:44:35.962800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.773 [2024-07-12 11:44:35.962809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.773 [2024-07-12 11:44:35.962829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.773 qpair failed and we were unable to recover it.
00:38:49.773 [2024-07-12 11:44:35.972780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.774 [2024-07-12 11:44:35.972849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.774 [2024-07-12 11:44:35.972871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.774 [2024-07-12 11:44:35.972883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.774 [2024-07-12 11:44:35.972893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032d780
00:38:49.774 [2024-07-12 11:44:35.972920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:49.774 qpair failed and we were unable to recover it.
00:38:49.774 [2024-07-12 11:44:35.982879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.774 [2024-07-12 11:44:35.982971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.774 [2024-07-12 11:44:35.983006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.774 [2024-07-12 11:44:35.983025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.774 [2024-07-12 11:44:35.983040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032ff80
00:38:49.774 [2024-07-12 11:44:35.983079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:49.774 qpair failed and we were unable to recover it.
00:38:49.774 [2024-07-12 11:44:35.992856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:49.774 [2024-07-12 11:44:35.992932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:49.774 [2024-07-12 11:44:35.992957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:49.774 [2024-07-12 11:44:35.992971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:49.774 [2024-07-12 11:44:35.992981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032ff80
00:38:49.774 [2024-07-12 11:44:35.993011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:38:49.774 qpair failed and we were unable to recover it.
00:38:49.774 [2024-07-12 11:44:36.002774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.774 [2024-07-12 11:44:36.002864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.774 [2024-07-12 11:44:36.002897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.774 [2024-07-12 11:44:36.002915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.774 [2024-07-12 11:44:36.002928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000350000 00:38:49.774 [2024-07-12 11:44:36.002963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:49.774 qpair failed and we were unable to recover it. 
00:38:49.774 [2024-07-12 11:44:36.012980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.774 [2024-07-12 11:44:36.013069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.774 [2024-07-12 11:44:36.013093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.774 [2024-07-12 11:44:36.013105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.774 [2024-07-12 11:44:36.013115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x615000350000 00:38:49.774 [2024-07-12 11:44:36.013139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:49.774 qpair failed and we were unable to recover it. 
00:38:49.774 [2024-07-12 11:44:36.022931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.774 [2024-07-12 11:44:36.023054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.774 [2024-07-12 11:44:36.023081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.774 [2024-07-12 11:44:36.023094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.774 [2024-07-12 11:44:36.023103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500033fe80 00:38:49.774 [2024-07-12 11:44:36.023126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:49.774 qpair failed and we were unable to recover it. 
00:38:49.774 [2024-07-12 11:44:36.032998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:49.774 [2024-07-12 11:44:36.033078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:49.774 [2024-07-12 11:44:36.033100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:49.774 [2024-07-12 11:44:36.033112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:49.774 [2024-07-12 11:44:36.033120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500033fe80 00:38:49.774 [2024-07-12 11:44:36.033143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:49.774 qpair failed and we were unable to recover it. 00:38:49.774 Controller properly reset. 00:38:50.033 Initializing NVMe Controllers 00:38:50.033 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:50.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:50.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:50.033 Initialization complete. Launching workers. 
00:38:50.033 Starting thread on core 1 00:38:50.033 Starting thread on core 2 00:38:50.033 Starting thread on core 3 00:38:50.033 Starting thread on core 0 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:50.033 00:38:50.033 real 0m11.545s 00:38:50.033 user 0m21.196s 00:38:50.033 sys 0m4.456s 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.033 ************************************ 00:38:50.033 END TEST nvmf_target_disconnect_tc2 00:38:50.033 ************************************ 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:50.033 rmmod nvme_tcp 00:38:50.033 rmmod nvme_fabrics 00:38:50.033 rmmod nvme_keyring 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:50.033 
11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1182259 ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1182259 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1182259 ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1182259 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182259 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182259' 00:38:50.033 killing process with pid 1182259 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1182259 00:38:50.033 11:44:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1182259 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:51.936 11:44:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:51.936 11:44:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.840 11:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:53.840 00:38:53.840 real 0m20.851s 00:38:53.840 user 0m50.889s 00:38:53.840 sys 0m9.050s 00:38:53.840 11:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:53.840 11:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 ************************************ 00:38:53.840 END TEST nvmf_target_disconnect 00:38:53.840 ************************************ 00:38:53.840 11:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:53.840 11:44:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:38:53.840 11:44:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:53.840 11:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 11:44:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:38:53.840 00:38:53.840 real 30m11.059s 00:38:53.840 user 77m44.264s 00:38:53.840 sys 7m3.887s 00:38:53.840 11:44:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:53.840 11:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 ************************************ 00:38:53.840 END TEST nvmf_tcp 00:38:53.840 ************************************ 00:38:53.840 11:44:40 -- common/autotest_common.sh@1142 -- # return 0 00:38:53.840 11:44:40 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:38:53.840 11:44:40 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:53.840 11:44:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:53.840 11:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:53.840 11:44:40 -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 ************************************ 00:38:53.840 START TEST spdkcli_nvmf_tcp 00:38:53.840 ************************************ 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:53.840 * Looking for test storage... 00:38:53.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.840 11:44:40 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1184013 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1184013 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1184013 ']' 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.840 11:44:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:54.100 [2024-07-12 11:44:40.224579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:54.100 [2024-07-12 11:44:40.224666] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184013 ] 00:38:54.100 EAL: No free 2048 kB hugepages reported on node 1 00:38:54.100 [2024-07-12 11:44:40.326978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:54.360 [2024-07-12 11:44:40.541744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.360 [2024-07-12 11:44:40.541754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.927 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:54.927 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:38:54.928 11:44:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:54.928 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:54.928 11:44:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:38:54.928 11:44:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:54.928 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:54.928 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:54.928 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:54.928 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:54.928 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:54.928 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:54.928 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:54.928 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:54.928 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:54.928 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:54.928 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:54.928 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:54.928 ' 00:38:57.463 [2024-07-12 11:44:43.799569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.836 [2024-07-12 11:44:45.080024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:01.365 [2024-07-12 11:44:47.459580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:03.268 [2024-07-12 11:44:49.510369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:04.706 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:04.706 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:04.706 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.706 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.706 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:04.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:04.706 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:04.965 11:44:51 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:04.965 11:44:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:05.224 11:44:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:05.483 11:44:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:05.483 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:05.483 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:05.483 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:05.483 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:05.483 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:05.483 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:05.483 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:05.483 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:05.483 ' 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:10.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:10.763 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:10.763 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:10.763 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1184013 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1184013 ']' 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1184013 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184013 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184013' 00:39:11.023 killing process with pid 1184013 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1184013 00:39:11.023 11:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1184013 00:39:12.407 11:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:12.407 11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:12.407 
11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1184013 ']' 00:39:12.407 11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1184013 00:39:12.407 11:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1184013 ']' 00:39:12.407 11:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1184013 00:39:12.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1184013) - No such process 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1184013 is not found' 00:39:12.408 Process with pid 1184013 is not found 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:12.408 00:39:12.408 real 0m18.477s 00:39:12.408 user 0m38.620s 00:39:12.408 sys 0m0.950s 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:12.408 11:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:12.408 ************************************ 00:39:12.408 END TEST spdkcli_nvmf_tcp 00:39:12.408 ************************************ 00:39:12.408 11:44:58 -- common/autotest_common.sh@1142 -- # return 0 00:39:12.408 11:44:58 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:12.408 11:44:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:12.408 11:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:12.408 11:44:58 -- common/autotest_common.sh@10 -- # set +x 00:39:12.408 
************************************ 00:39:12.408 START TEST nvmf_identify_passthru 00:39:12.408 ************************************ 00:39:12.408 11:44:58 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:12.408 * Looking for test storage... 00:39:12.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.408 11:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.408 11:44:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:12.408 11:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.408 11:44:58 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:39:12.408 11:44:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.408 11:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.408 11:44:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:12.408 11:44:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:12.408 11:44:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:39:12.408 11:44:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.678 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:17.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:17.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:17.679 11:45:03 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:17.679 Found net devices under 0000:86:00.0: cvl_0_0 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.679 11:45:03 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:17.679 Found net devices under 0000:86:00.1: cvl_0_1 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.679 11:45:03 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.679 11:45:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.679 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.679 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.679 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:17.679 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:17.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:39:17.938 00:39:17.938 --- 10.0.0.2 ping statistics --- 00:39:17.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.938 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:17.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:39:17.938 00:39:17.938 --- 10.0.0.1 ping statistics --- 00:39:17.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.938 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:17.938 11:45:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:39:17.938 11:45:04 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:39:17.938 11:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:17.938 11:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:18.197 EAL: No free 2048 kB hugepages reported on node 1 00:39:22.385 11:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:39:22.385 11:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:22.385 11:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:39:22.385 11:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:22.385 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:26.572 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:26.572 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:26.572 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:26.572 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1191898 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:26.572 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:26.573 11:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1191898 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1191898 ']' 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:26.573 11:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:26.573 [2024-07-12 11:45:12.907168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:26.573 [2024-07-12 11:45:12.907253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.831 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.831 [2024-07-12 11:45:13.015630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:27.091 [2024-07-12 11:45:13.228872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:27.091 [2024-07-12 11:45:13.228915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:27.091 [2024-07-12 11:45:13.228926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:27.091 [2024-07-12 11:45:13.228935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:27.091 [2024-07-12 11:45:13.228944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:27.091 [2024-07-12 11:45:13.229071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.091 [2024-07-12 11:45:13.229259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:27.091 [2024-07-12 11:45:13.229335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.091 [2024-07-12 11:45:13.229341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:39:27.350 11:45:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:27.350 INFO: Log level set to 20 00:39:27.350 INFO: Requests: 00:39:27.350 { 00:39:27.350 "jsonrpc": "2.0", 00:39:27.350 "method": "nvmf_set_config", 00:39:27.350 "id": 1, 00:39:27.350 "params": { 00:39:27.350 "admin_cmd_passthru": { 00:39:27.350 "identify_ctrlr": true 00:39:27.350 } 00:39:27.350 } 00:39:27.350 } 00:39:27.350 00:39:27.350 INFO: response: 00:39:27.350 { 00:39:27.350 "jsonrpc": "2.0", 00:39:27.350 "id": 1, 00:39:27.350 "result": true 00:39:27.350 } 00:39:27.350 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.350 11:45:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.350 11:45:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:27.350 INFO: Setting log level to 20 00:39:27.350 INFO: Setting log level to 20 00:39:27.350 INFO: Log level set to 20 00:39:27.350 INFO: Log level set to 20 00:39:27.350 
INFO: Requests: 00:39:27.350 { 00:39:27.350 "jsonrpc": "2.0", 00:39:27.350 "method": "framework_start_init", 00:39:27.350 "id": 1 00:39:27.350 } 00:39:27.350 00:39:27.350 INFO: Requests: 00:39:27.350 { 00:39:27.350 "jsonrpc": "2.0", 00:39:27.350 "method": "framework_start_init", 00:39:27.350 "id": 1 00:39:27.350 } 00:39:27.350 00:39:27.919 [2024-07-12 11:45:14.099729] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:27.919 INFO: response: 00:39:27.919 { 00:39:27.919 "jsonrpc": "2.0", 00:39:27.919 "id": 1, 00:39:27.919 "result": true 00:39:27.919 } 00:39:27.919 00:39:27.919 INFO: response: 00:39:27.919 { 00:39:27.919 "jsonrpc": "2.0", 00:39:27.919 "id": 1, 00:39:27.919 "result": true 00:39:27.919 } 00:39:27.919 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.919 11:45:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:27.919 INFO: Setting log level to 40 00:39:27.919 INFO: Setting log level to 40 00:39:27.919 INFO: Setting log level to 40 00:39:27.919 [2024-07-12 11:45:14.119039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.919 11:45:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:27.919 11:45:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:39:27.919 11:45:14 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.919 11:45:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.212 Nvme0n1 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.212 [2024-07-12 11:45:17.069828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.212 11:45:17 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.212 [ 00:39:31.212 { 00:39:31.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:31.212 "subtype": "Discovery", 00:39:31.212 "listen_addresses": [], 00:39:31.212 "allow_any_host": true, 00:39:31.212 "hosts": [] 00:39:31.212 }, 00:39:31.212 { 00:39:31.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:31.212 "subtype": "NVMe", 00:39:31.212 "listen_addresses": [ 00:39:31.212 { 00:39:31.212 "trtype": "TCP", 00:39:31.212 "adrfam": "IPv4", 00:39:31.212 "traddr": "10.0.0.2", 00:39:31.212 "trsvcid": "4420" 00:39:31.212 } 00:39:31.212 ], 00:39:31.212 "allow_any_host": true, 00:39:31.212 "hosts": [], 00:39:31.212 "serial_number": "SPDK00000000000001", 00:39:31.212 "model_number": "SPDK bdev Controller", 00:39:31.212 "max_namespaces": 1, 00:39:31.212 "min_cntlid": 1, 00:39:31.212 "max_cntlid": 65519, 00:39:31.212 "namespaces": [ 00:39:31.212 { 00:39:31.212 "nsid": 1, 00:39:31.212 "bdev_name": "Nvme0n1", 00:39:31.212 "name": "Nvme0n1", 00:39:31.212 "nguid": "98F9EA195A0342D39BEB8BA0651D8599", 00:39:31.212 "uuid": "98f9ea19-5a03-42d3-9beb-8ba0651d8599" 00:39:31.212 } 00:39:31.212 ] 00:39:31.212 } 00:39:31.212 ] 00:39:31.212 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:31.212 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:31.213 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:39:31.213 11:45:17 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:31.213 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:39:31.213 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:31.213 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.213 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.472 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:31.472 11:45:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:31.472 rmmod 
nvme_tcp 00:39:31.472 rmmod nvme_fabrics 00:39:31.472 rmmod nvme_keyring 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1191898 ']' 00:39:31.472 11:45:17 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1191898 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1191898 ']' 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1191898 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191898 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191898' 00:39:31.472 killing process with pid 1191898 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1191898 00:39:31.472 11:45:17 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1191898 00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:34.018 11:45:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.018 11:45:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:34.018 11:45:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.600 11:45:22 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:36.600 00:39:36.600 real 0m23.841s 00:39:36.600 user 0m34.518s 00:39:36.600 sys 0m5.182s 00:39:36.600 11:45:22 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:36.600 11:45:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:36.600 ************************************ 00:39:36.600 END TEST nvmf_identify_passthru 00:39:36.600 ************************************ 00:39:36.600 11:45:22 -- common/autotest_common.sh@1142 -- # return 0 00:39:36.600 11:45:22 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:36.600 11:45:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:36.600 11:45:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:36.600 11:45:22 -- common/autotest_common.sh@10 -- # set +x 00:39:36.600 ************************************ 00:39:36.600 START TEST nvmf_dif 00:39:36.600 ************************************ 00:39:36.600 11:45:22 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:36.600 * Looking for test storage... 
00:39:36.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.600 11:45:22 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.600 11:45:22 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.600 11:45:22 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.600 11:45:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.600 11:45:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.600 11:45:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.600 11:45:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:36.600 11:45:22 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:36.600 11:45:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.600 11:45:22 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:36.600 11:45:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:36.600 11:45:22 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:39:36.600 11:45:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:41.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:39:41.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.874 11:45:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:41.874 Found net devices under 0000:86:00.0: cvl_0_0 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:41.875 Found net devices under 0000:86:00.1: cvl_0_1 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.875 11:45:27 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:41.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:39:41.875 00:39:41.875 --- 10.0.0.2 ping statistics --- 00:39:41.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.875 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:41.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:39:41.875 00:39:41.875 --- 10.0.0.1 ping statistics --- 00:39:41.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.875 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:41.875 11:45:27 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:44.409 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:44.409 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:44.409 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.409 11:45:30 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:44.409 11:45:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:44.409 11:45:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1197693 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:44.409 11:45:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1197693 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1197693 ']' 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:44.409 11:45:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:44.409 [2024-07-12 11:45:30.523887] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:44.409 [2024-07-12 11:45:30.523980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.409 EAL: No free 2048 kB hugepages reported on node 1 00:39:44.409 [2024-07-12 11:45:30.631322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.668 [2024-07-12 11:45:30.836474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.668 [2024-07-12 11:45:30.836520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.668 [2024-07-12 11:45:30.836532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.668 [2024-07-12 11:45:30.836559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.668 [2024-07-12 11:45:30.836569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:44.668 [2024-07-12 11:45:30.836607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:39:45.236 11:45:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:45.236 11:45:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.236 11:45:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:45.236 11:45:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:45.236 [2024-07-12 11:45:31.332541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.236 11:45:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:45.236 11:45:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:45.236 ************************************ 00:39:45.236 START TEST fio_dif_1_default 00:39:45.236 ************************************ 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.236 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:45.236 bdev_null0 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:45.237 [2024-07-12 11:45:31.388809] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:39:45.237 11:45:31 
nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:45.237 { 00:39:45.237 "params": { 00:39:45.237 "name": "Nvme$subsystem", 00:39:45.237 "trtype": "$TEST_TRANSPORT", 00:39:45.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:45.237 "adrfam": "ipv4", 00:39:45.237 "trsvcid": "$NVMF_PORT", 00:39:45.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:45.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:45.237 "hdgst": ${hdgst:-false}, 00:39:45.237 "ddgst": ${ddgst:-false} 00:39:45.237 }, 00:39:45.237 "method": "bdev_nvme_attach_controller" 00:39:45.237 } 00:39:45.237 EOF 00:39:45.237 )") 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:45.237 "params": { 00:39:45.237 "name": "Nvme0", 00:39:45.237 "trtype": "tcp", 00:39:45.237 "traddr": "10.0.0.2", 00:39:45.237 "adrfam": "ipv4", 00:39:45.237 "trsvcid": "4420", 00:39:45.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:45.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:45.237 "hdgst": false, 00:39:45.237 "ddgst": false 00:39:45.237 }, 00:39:45.237 "method": "bdev_nvme_attach_controller" 00:39:45.237 }' 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:45.237 11:45:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:45.496 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:45.496 fio-3.35 00:39:45.496 Starting 1 thread 00:39:45.496 EAL: No free 2048 kB hugepages reported on node 1 00:39:57.708 00:39:57.708 filename0: (groupid=0, jobs=1): err= 0: pid=1198068: Fri Jul 12 11:45:42 2024 00:39:57.708 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10008msec) 00:39:57.708 slat (nsec): min=6987, max=37586, avg=8709.77, stdev=2631.53 00:39:57.708 clat (usec): min=523, max=42437, avg=21088.34, stdev=20358.08 00:39:57.708 lat (usec): min=530, max=42446, avg=21097.05, stdev=20358.06 00:39:57.708 clat percentiles (usec): 00:39:57.708 | 
1.00th=[ 537], 5.00th=[ 545], 10.00th=[ 603], 20.00th=[ 668], 00:39:57.708 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[41157], 60.00th=[41157], 00:39:57.708 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:39:57.708 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:57.708 | 99.99th=[42206] 00:39:57.708 bw ( KiB/s): min= 672, max= 768, per=99.76%, avg=756.80, stdev=28.00, samples=20 00:39:57.708 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:39:57.708 lat (usec) : 750=49.79% 00:39:57.708 lat (msec) : 50=50.21% 00:39:57.708 cpu : usr=94.86%, sys=4.84%, ctx=15, majf=0, minf=1634 00:39:57.708 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:57.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:57.708 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:57.708 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:57.708 00:39:57.708 Run status group 0 (all jobs): 00:39:57.708 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10008-10008msec 00:39:57.708 ----------------------------------------------------- 00:39:57.708 Suppressions used: 00:39:57.708 count bytes template 00:39:57.708 1 8 /usr/src/fio/parse.c 00:39:57.708 1 8 libtcmalloc_minimal.so 00:39:57.708 1 904 libcrypto.so 00:39:57.708 ----------------------------------------------------- 00:39:57.708 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local 
sub_id=0 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.708 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 00:39:57.709 real 0m12.489s 00:39:57.709 user 0m17.079s 00:39:57.709 sys 0m0.950s 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 ************************************ 00:39:57.709 END TEST fio_dif_1_default 00:39:57.709 ************************************ 00:39:57.709 11:45:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:57.709 11:45:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:57.709 11:45:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:57.709 11:45:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 ************************************ 00:39:57.709 START TEST fio_dif_1_multi_subsystems 00:39:57.709 ************************************ 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 
00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 bdev_null0 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 [2024-07-12 11:45:43.939035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 bdev_null1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 
00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:57.709 { 00:39:57.709 "params": { 00:39:57.709 "name": "Nvme$subsystem", 00:39:57.709 "trtype": "$TEST_TRANSPORT", 00:39:57.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:57.709 "adrfam": "ipv4", 00:39:57.709 "trsvcid": "$NVMF_PORT", 00:39:57.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:57.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:57.709 "hdgst": ${hdgst:-false}, 00:39:57.709 "ddgst": ${ddgst:-false} 00:39:57.709 }, 00:39:57.709 "method": "bdev_nvme_attach_controller" 00:39:57.709 } 00:39:57.709 EOF 00:39:57.709 )") 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:57.709 { 00:39:57.709 "params": { 00:39:57.709 "name": "Nvme$subsystem", 00:39:57.709 "trtype": "$TEST_TRANSPORT", 00:39:57.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:57.709 "adrfam": "ipv4", 00:39:57.709 "trsvcid": "$NVMF_PORT", 00:39:57.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:57.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:57.709 "hdgst": ${hdgst:-false}, 00:39:57.709 "ddgst": ${ddgst:-false} 00:39:57.709 }, 00:39:57.709 "method": "bdev_nvme_attach_controller" 00:39:57.709 } 00:39:57.709 EOF 00:39:57.709 )") 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:57.709 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:57.710 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:39:57.710 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:57.710 11:45:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:57.710 "params": { 00:39:57.710 "name": "Nvme0", 00:39:57.710 "trtype": "tcp", 00:39:57.710 "traddr": "10.0.0.2", 00:39:57.710 "adrfam": "ipv4", 00:39:57.710 "trsvcid": "4420", 00:39:57.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:57.710 "hdgst": false, 00:39:57.710 "ddgst": false 00:39:57.710 }, 00:39:57.710 "method": "bdev_nvme_attach_controller" 00:39:57.710 },{ 00:39:57.710 "params": { 00:39:57.710 "name": "Nvme1", 00:39:57.710 "trtype": "tcp", 00:39:57.710 "traddr": "10.0.0.2", 00:39:57.710 "adrfam": "ipv4", 00:39:57.710 "trsvcid": "4420", 00:39:57.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:57.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:57.710 "hdgst": false, 00:39:57.710 "ddgst": false 00:39:57.710 }, 00:39:57.710 "method": "bdev_nvme_attach_controller" 00:39:57.710 }' 00:39:57.710 11:45:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:57.710 11:45:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:57.710 11:45:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:39:57.710 11:45:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:57.710 11:45:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:58.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:58.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:58.276 fio-3.35 00:39:58.276 Starting 2 threads 00:39:58.276 EAL: No free 2048 kB hugepages reported on node 1 00:40:10.503 00:40:10.503 filename0: (groupid=0, jobs=1): err= 0: pid=1200249: Fri Jul 12 11:45:55 2024 00:40:10.503 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10021msec) 00:40:10.503 slat (nsec): min=3486, max=21210, avg=8510.43, stdev=2043.18 00:40:10.503 clat (usec): min=464, max=45388, avg=21205.02, stdev=20585.03 00:40:10.503 lat (usec): min=471, max=45403, avg=21213.53, stdev=20584.42 00:40:10.503 clat percentiles (usec): 00:40:10.503 | 1.00th=[ 478], 5.00th=[ 486], 10.00th=[ 494], 20.00th=[ 506], 00:40:10.503 | 30.00th=[ 529], 40.00th=[ 603], 50.00th=[41157], 60.00th=[41157], 00:40:10.503 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:40:10.503 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:40:10.503 | 99.99th=[45351] 00:40:10.503 bw ( KiB/s): min= 672, max= 768, per=65.89%, avg=753.60, stdev=30.22, samples=20 00:40:10.503 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:40:10.503 lat (usec) : 500=15.89%, 750=33.26%, 1000=0.64% 00:40:10.503 lat (msec) : 50=50.21% 00:40:10.503 cpu : usr=97.53%, sys=2.18%, ctx=15, majf=0, minf=1634 00:40:10.503 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.503 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:10.503 filename1: (groupid=0, jobs=1): err= 0: pid=1200250: Fri Jul 12 11:45:55 2024 00:40:10.503 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 00:40:10.503 slat (nsec): min=5174, max=30686, avg=9258.11, stdev=2865.49 00:40:10.503 clat (usec): min=40753, max=46231, 
avg=41051.04, stdev=408.60 00:40:10.503 lat (usec): min=40760, max=46249, avg=41060.30, stdev=408.75 00:40:10.503 clat percentiles (usec): 00:40:10.503 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:10.503 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:10.503 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:10.503 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:40:10.503 | 99.99th=[46400] 00:40:10.503 bw ( KiB/s): min= 384, max= 416, per=33.95%, avg=388.80, stdev=11.72, samples=20 00:40:10.503 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:40:10.503 lat (msec) : 50=100.00% 00:40:10.503 cpu : usr=97.80%, sys=1.93%, ctx=13, majf=0, minf=1633 00:40:10.503 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.503 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:10.503 00:40:10.503 Run status group 0 (all jobs): 00:40:10.503 READ: bw=1143KiB/s (1170kB/s), 389KiB/s-754KiB/s (399kB/s-772kB/s), io=11.2MiB (11.7MB), run=10021-10024msec 00:40:10.503 ----------------------------------------------------- 00:40:10.503 Suppressions used: 00:40:10.503 count bytes template 00:40:10.503 2 16 /usr/src/fio/parse.c 00:40:10.503 1 8 libtcmalloc_minimal.so 00:40:10.503 1 904 libcrypto.so 00:40:10.503 ----------------------------------------------------- 00:40:10.503 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:10.503 11:45:56 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:10.503 11:45:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 00:40:10.503 real 0m12.527s 00:40:10.503 user 0m27.593s 00:40:10.503 sys 0m0.910s 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 ************************************ 00:40:10.503 END TEST fio_dif_1_multi_subsystems 00:40:10.503 ************************************ 00:40:10.503 11:45:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:10.503 11:45:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:10.503 11:45:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:10.503 11:45:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 ************************************ 00:40:10.503 START TEST fio_dif_rand_params 00:40:10.503 ************************************ 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 
00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 bdev_null0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:10.503 [2024-07-12 11:45:56.537190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:10.503 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:10.504 { 00:40:10.504 "params": { 00:40:10.504 "name": "Nvme$subsystem", 00:40:10.504 "trtype": "$TEST_TRANSPORT", 00:40:10.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:10.504 "adrfam": "ipv4", 00:40:10.504 "trsvcid": "$NVMF_PORT", 00:40:10.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:10.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:10.504 "hdgst": ${hdgst:-false}, 00:40:10.504 "ddgst": ${ddgst:-false} 00:40:10.504 }, 00:40:10.504 "method": "bdev_nvme_attach_controller" 00:40:10.504 } 00:40:10.504 EOF 00:40:10.504 )") 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 
-- # jq . 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:10.504 "params": { 00:40:10.504 "name": "Nvme0", 00:40:10.504 "trtype": "tcp", 00:40:10.504 "traddr": "10.0.0.2", 00:40:10.504 "adrfam": "ipv4", 00:40:10.504 "trsvcid": "4420", 00:40:10.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:10.504 "hdgst": false, 00:40:10.504 "ddgst": false 00:40:10.504 }, 00:40:10.504 "method": "bdev_nvme_attach_controller" 00:40:10.504 }' 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:10.504 11:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:10.761 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:10.761 ... 
00:40:10.761 fio-3.35 00:40:10.761 Starting 3 threads 00:40:10.761 EAL: No free 2048 kB hugepages reported on node 1 00:40:17.399 00:40:17.399 filename0: (groupid=0, jobs=1): err= 0: pid=1202299: Fri Jul 12 11:46:02 2024 00:40:17.399 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(172MiB/5048msec) 00:40:17.399 slat (nsec): min=7518, max=90252, avg=16000.38, stdev=5196.55 00:40:17.399 clat (usec): min=4732, max=53763, avg=10985.10, stdev=5583.48 00:40:17.399 lat (usec): min=4743, max=53776, avg=11001.10, stdev=5583.34 00:40:17.399 clat percentiles (usec): 00:40:17.399 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 8356], 20.00th=[ 9241], 00:40:17.399 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:40:17.399 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12125], 95.00th=[13042], 00:40:17.399 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52167], 99.95th=[53740], 00:40:17.399 | 99.99th=[53740] 00:40:17.399 bw ( KiB/s): min=27648, max=39168, per=34.17%, avg=35072.00, stdev=3652.43, samples=10 00:40:17.399 iops : min= 216, max= 306, avg=274.00, stdev=28.53, samples=10 00:40:17.399 lat (msec) : 10=40.67%, 20=57.43%, 50=1.24%, 100=0.66% 00:40:17.399 cpu : usr=95.52%, sys=4.12%, ctx=13, majf=0, minf=1634 00:40:17.399 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 issued rwts: total=1372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:17.400 filename0: (groupid=0, jobs=1): err= 0: pid=1202300: Fri Jul 12 11:46:02 2024 00:40:17.400 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(168MiB/5045msec) 00:40:17.400 slat (nsec): min=7884, max=67488, avg=15427.37, stdev=4357.37 00:40:17.400 clat (usec): min=3665, max=51972, avg=11245.05, stdev=4783.28 00:40:17.400 lat (usec): min=3675, max=51980, avg=11260.48, 
stdev=4783.58 00:40:17.400 clat percentiles (usec): 00:40:17.400 | 1.00th=[ 4146], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 9241], 00:40:17.400 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11600], 00:40:17.400 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13435], 95.00th=[13960], 00:40:17.400 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[52167], 00:40:17.400 | 99.99th=[52167] 00:40:17.400 bw ( KiB/s): min=27904, max=41216, per=33.37%, avg=34252.80, stdev=3867.02, samples=10 00:40:17.400 iops : min= 218, max= 322, avg=267.60, stdev=30.21, samples=10 00:40:17.400 lat (msec) : 4=0.52%, 10=30.45%, 20=67.76%, 50=1.04%, 100=0.22% 00:40:17.400 cpu : usr=95.54%, sys=4.10%, ctx=11, majf=0, minf=1635 00:40:17.400 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 issued rwts: total=1340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:17.400 filename0: (groupid=0, jobs=1): err= 0: pid=1202301: Fri Jul 12 11:46:02 2024 00:40:17.400 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(167MiB/5045msec) 00:40:17.400 slat (nsec): min=7668, max=46576, avg=17004.25, stdev=6867.76 00:40:17.400 clat (usec): min=3994, max=91107, avg=11276.94, stdev=5849.64 00:40:17.400 lat (usec): min=4007, max=91135, avg=11293.94, stdev=5850.20 00:40:17.400 clat percentiles (usec): 00:40:17.400 | 1.00th=[ 4424], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 9372], 00:40:17.400 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:40:17.400 | 70.00th=[11600], 80.00th=[12256], 90.00th=[13042], 95.00th=[13698], 00:40:17.400 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53740], 99.95th=[90702], 00:40:17.400 | 99.99th=[90702] 00:40:17.400 bw ( KiB/s): min=29952, max=44544, per=33.28%, avg=34157.00, 
stdev=4011.75, samples=10 00:40:17.400 iops : min= 234, max= 348, avg=266.80, stdev=31.36, samples=10 00:40:17.400 lat (msec) : 4=0.07%, 10=32.63%, 20=65.64%, 50=0.37%, 100=1.27% 00:40:17.400 cpu : usr=95.28%, sys=4.38%, ctx=8, majf=0, minf=1637 00:40:17.400 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.400 issued rwts: total=1336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.400 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:17.400 00:40:17.400 Run status group 0 (all jobs): 00:40:17.400 READ: bw=100MiB/s (105MB/s), 33.1MiB/s-34.0MiB/s (34.7MB/s-35.6MB/s), io=506MiB (531MB), run=5045-5048msec 00:40:17.658 ----------------------------------------------------- 00:40:17.658 Suppressions used: 00:40:17.658 count bytes template 00:40:17.658 5 44 /usr/src/fio/parse.c 00:40:17.658 1 8 libtcmalloc_minimal.so 00:40:17.658 1 904 libcrypto.so 00:40:17.658 ----------------------------------------------------- 00:40:17.659 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 bdev_null0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 [2024-07-12 11:46:03.978664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 bdev_null1 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 
--dif-type 2 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.659 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.917 bdev_null2 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:17.917 11:46:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:17.917 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:17.918 { 00:40:17.918 "params": { 00:40:17.918 "name": 
"Nvme$subsystem", 00:40:17.918 "trtype": "$TEST_TRANSPORT", 00:40:17.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "$NVMF_PORT", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.918 "hdgst": ${hdgst:-false}, 00:40:17.918 "ddgst": ${ddgst:-false} 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 } 00:40:17.918 EOF 00:40:17.918 )") 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:17.918 { 00:40:17.918 "params": { 00:40:17.918 "name": "Nvme$subsystem", 00:40:17.918 "trtype": "$TEST_TRANSPORT", 00:40:17.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "$NVMF_PORT", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.918 "hdgst": ${hdgst:-false}, 00:40:17.918 "ddgst": ${ddgst:-false} 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 } 00:40:17.918 EOF 00:40:17.918 )") 00:40:17.918 
11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:17.918 { 00:40:17.918 "params": { 00:40:17.918 "name": "Nvme$subsystem", 00:40:17.918 "trtype": "$TEST_TRANSPORT", 00:40:17.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "$NVMF_PORT", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.918 "hdgst": ${hdgst:-false}, 00:40:17.918 "ddgst": ${ddgst:-false} 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 } 00:40:17.918 EOF 00:40:17.918 )") 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:17.918 "params": { 00:40:17.918 "name": "Nvme0", 00:40:17.918 "trtype": "tcp", 00:40:17.918 "traddr": "10.0.0.2", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "4420", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:17.918 "hdgst": false, 00:40:17.918 "ddgst": false 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 },{ 00:40:17.918 "params": { 00:40:17.918 "name": "Nvme1", 00:40:17.918 "trtype": "tcp", 00:40:17.918 "traddr": "10.0.0.2", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "4420", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:17.918 "hdgst": false, 00:40:17.918 "ddgst": false 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 },{ 00:40:17.918 "params": { 00:40:17.918 "name": "Nvme2", 00:40:17.918 "trtype": "tcp", 00:40:17.918 "traddr": "10.0.0.2", 00:40:17.918 "adrfam": "ipv4", 00:40:17.918 "trsvcid": "4420", 00:40:17.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:17.918 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:17.918 "hdgst": false, 00:40:17.918 "ddgst": false 00:40:17.918 }, 00:40:17.918 "method": "bdev_nvme_attach_controller" 00:40:17.918 }' 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:17.918 11:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:18.176 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:18.176 ... 00:40:18.176 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:18.176 ... 00:40:18.176 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:18.176 ... 00:40:18.176 fio-3.35 00:40:18.176 Starting 24 threads 00:40:18.176 EAL: No free 2048 kB hugepages reported on node 1 00:40:30.369 00:40:30.369 filename0: (groupid=0, jobs=1): err= 0: pid=1203695: Fri Jul 12 11:46:15 2024 00:40:30.369 read: IOPS=486, BW=1946KiB/s (1992kB/s)(19.0MiB/10012msec) 00:40:30.369 slat (usec): min=7, max=166, avg=18.68, stdev= 7.58 00:40:30.369 clat (usec): min=12667, max=59264, avg=32727.14, stdev=2653.69 00:40:30.369 lat (usec): min=12677, max=59293, avg=32745.82, stdev=2654.15 00:40:30.369 clat percentiles (usec): 00:40:30.369 | 1.00th=[19792], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.369 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.369 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:40:30.369 | 99.00th=[34341], 99.50th=[36963], 99.90th=[58983], 99.95th=[58983], 00:40:30.369 | 99.99th=[59507] 00:40:30.369 bw ( KiB/s): min= 1792, max= 2224, per=4.20%, avg=1941.20, stdev=83.55, samples=20 00:40:30.369 iops : min= 448, max= 556, avg=485.30, stdev=20.89, samples=20 00:40:30.369 lat (msec) : 20=1.40%, 50=98.28%, 100=0.33% 00:40:30.369 cpu : usr=98.87%, sys=0.74%, ctx=16, majf=0, minf=1634 00:40:30.370 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203696: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:40:30.370 slat (nsec): min=9394, max=83638, avg=25689.29, stdev=7469.50 00:40:30.370 clat (usec): min=24033, max=68582, avg=33037.28, stdev=2151.38 00:40:30.370 lat (usec): min=24049, max=68621, avg=33062.97, stdev=2150.87 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[34341], 99.90th=[68682], 99.95th=[68682], 00:40:30.370 | 99.99th=[68682] 00:40:30.370 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=83.06, samples=20 00:40:30.370 iops : min= 416, max= 512, avg=480.00, stdev=20.76, samples=20 00:40:30.370 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.370 cpu : usr=98.62%, sys=1.00%, ctx=14, majf=0, minf=1633 00:40:30.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203697: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:40:30.370 slat (nsec): min=9418, max=84808, avg=27270.80, 
stdev=8518.51 00:40:30.370 clat (usec): min=23992, max=68403, avg=33023.02, stdev=2140.36 00:40:30.370 lat (usec): min=24008, max=68451, avg=33050.29, stdev=2140.25 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[34341], 99.90th=[68682], 99.95th=[68682], 00:40:30.370 | 99.99th=[68682] 00:40:30.370 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=83.06, samples=20 00:40:30.370 iops : min= 416, max= 512, avg=480.00, stdev=20.76, samples=20 00:40:30.370 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.370 cpu : usr=98.62%, sys=0.99%, ctx=16, majf=0, minf=1634 00:40:30.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203698: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10008msec) 00:40:30.370 slat (nsec): min=3880, max=97013, avg=23532.61, stdev=11439.38 00:40:30.370 clat (usec): min=18252, max=74199, avg=32947.33, stdev=2981.74 00:40:30.370 lat (usec): min=18269, max=74215, avg=32970.87, stdev=2981.30 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[22152], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.370 | 99.00th=[34866], 99.50th=[50070], 99.90th=[73925], 
99.95th=[73925], 00:40:30.370 | 99.99th=[73925] 00:40:30.370 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1925.05, stdev=75.43, samples=19 00:40:30.370 iops : min= 416, max= 512, avg=481.26, stdev=18.86, samples=19 00:40:30.370 lat (msec) : 20=0.08%, 50=99.38%, 100=0.54% 00:40:30.370 cpu : usr=98.79%, sys=0.83%, ctx=13, majf=0, minf=1635 00:40:30.370 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203699: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:40:30.370 slat (nsec): min=4344, max=90226, avg=21972.31, stdev=9267.50 00:40:30.370 clat (usec): min=28357, max=89876, avg=33174.72, stdev=3047.49 00:40:30.370 lat (usec): min=28369, max=89894, avg=33196.69, stdev=3046.43 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[34866], 99.90th=[84411], 99.95th=[84411], 00:40:30.370 | 99.99th=[89654] 00:40:30.370 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1913.26, stdev=67.11, samples=19 00:40:30.370 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:40:30.370 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.370 cpu : usr=98.64%, sys=0.99%, ctx=14, majf=0, minf=1633 00:40:30.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203700: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10027msec) 00:40:30.370 slat (nsec): min=3641, max=83958, avg=25785.92, stdev=8296.30 00:40:30.370 clat (usec): min=19637, max=82884, avg=33088.84, stdev=2861.43 00:40:30.370 lat (usec): min=19648, max=82898, avg=33114.62, stdev=2860.23 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[34866], 99.90th=[82314], 99.95th=[83362], 00:40:30.370 | 99.99th=[83362] 00:40:30.370 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=71.93, samples=20 00:40:30.370 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:40:30.370 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:40:30.370 cpu : usr=98.77%, sys=0.84%, ctx=19, majf=0, minf=1634 00:40:30.370 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203701: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=483, BW=1933KiB/s (1979kB/s)(18.9MiB/10032msec) 00:40:30.370 slat (nsec): min=4710, max=75672, avg=14407.80, stdev=6855.91 00:40:30.370 clat (usec): min=18284, 
max=48391, avg=32982.41, stdev=1334.71 00:40:30.370 lat (usec): min=18294, max=48405, avg=32996.82, stdev=1334.64 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[25560], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:40:30.370 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:30.370 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[34866], 99.90th=[39584], 99.95th=[47973], 00:40:30.370 | 99.99th=[48497] 00:40:30.370 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1932.80, stdev=39.40, samples=20 00:40:30.370 iops : min= 480, max= 512, avg=483.20, stdev= 9.85, samples=20 00:40:30.370 lat (msec) : 20=0.41%, 50=99.59% 00:40:30.370 cpu : usr=98.66%, sys=0.96%, ctx=17, majf=0, minf=1637 00:40:30.370 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename0: (groupid=0, jobs=1): err= 0: pid=1203702: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10006msec) 00:40:30.370 slat (nsec): min=5240, max=92954, avg=41228.25, stdev=14671.85 00:40:30.370 clat (usec): min=17979, max=47546, avg=32755.94, stdev=1384.98 00:40:30.370 lat (usec): min=17988, max=47556, avg=32797.17, stdev=1385.83 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[28181], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:40:30.370 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:40:30.370 | 99.00th=[34341], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:40:30.370 | 99.99th=[47449] 
00:40:30.370 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1926.74, stdev=29.37, samples=19 00:40:30.370 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:40:30.370 lat (msec) : 20=0.33%, 50=99.67% 00:40:30.370 cpu : usr=98.75%, sys=0.83%, ctx=15, majf=0, minf=1636 00:40:30.370 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:30.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.370 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.370 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.370 filename1: (groupid=0, jobs=1): err= 0: pid=1203703: Fri Jul 12 11:46:15 2024 00:40:30.370 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10019msec) 00:40:30.370 slat (nsec): min=4601, max=75839, avg=17097.86, stdev=7133.15 00:40:30.370 clat (usec): min=5550, max=61420, avg=32759.57, stdev=2846.84 00:40:30.370 lat (usec): min=5560, max=61429, avg=32776.66, stdev=2847.06 00:40:30.370 clat percentiles (usec): 00:40:30.370 | 1.00th=[18220], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.370 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.370 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:30.370 | 99.00th=[34341], 99.50th=[40109], 99.90th=[52691], 99.95th=[52691], 00:40:30.370 | 99.99th=[61604] 00:40:30.370 bw ( KiB/s): min= 1920, max= 2224, per=4.20%, avg=1941.60, stdev=72.35, samples=20 00:40:30.370 iops : min= 480, max= 556, avg=485.40, stdev=18.09, samples=20 00:40:30.370 lat (msec) : 10=0.80%, 20=0.57%, 50=98.48%, 100=0.14% 00:40:30.370 cpu : usr=98.76%, sys=0.84%, ctx=15, majf=0, minf=1633 00:40:30.370 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.0%, 
8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203704: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=480, BW=1920KiB/s (1966kB/s)(18.8MiB/10012msec) 00:40:30.371 slat (nsec): min=6972, max=95791, avg=27029.38, stdev=11344.13 00:40:30.371 clat (usec): min=18590, max=78590, avg=33088.83, stdev=2928.38 00:40:30.371 lat (usec): min=18607, max=78616, avg=33115.86, stdev=2927.26 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34866], 99.50th=[47973], 99.90th=[78119], 99.95th=[78119], 00:40:30.371 | 99.99th=[78119] 00:40:30.371 bw ( KiB/s): min= 1712, max= 2048, per=4.14%, avg=1915.79, stdev=57.40, samples=19 00:40:30.371 iops : min= 428, max= 512, avg=478.95, stdev=14.35, samples=19 00:40:30.371 lat (msec) : 20=0.15%, 50=99.35%, 100=0.50% 00:40:30.371 cpu : usr=98.71%, sys=0.91%, ctx=10, majf=0, minf=1632 00:40:30.371 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203705: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10006msec) 00:40:30.371 slat (usec): min=4, max=158, avg=26.19, stdev=11.13 00:40:30.371 clat (usec): min=19907, max=87573, avg=32911.89, stdev=3606.43 00:40:30.371 
lat (usec): min=19916, max=87589, avg=32938.07, stdev=3606.24 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[23725], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34866], 99.50th=[45351], 99.90th=[87557], 99.95th=[87557], 00:40:30.371 | 99.99th=[87557] 00:40:30.371 bw ( KiB/s): min= 1664, max= 2176, per=4.16%, avg=1925.05, stdev=86.67, samples=19 00:40:30.371 iops : min= 416, max= 544, avg=481.26, stdev=21.67, samples=19 00:40:30.371 lat (msec) : 20=0.31%, 50=99.36%, 100=0.33% 00:40:30.371 cpu : usr=98.77%, sys=0.84%, ctx=13, majf=0, minf=1632 00:40:30.371 IO depths : 1=5.9%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203706: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10020msec) 00:40:30.371 slat (nsec): min=3779, max=83981, avg=26589.19, stdev=8821.13 00:40:30.371 clat (usec): min=20682, max=76226, avg=33058.47, stdev=2287.00 00:40:30.371 lat (usec): min=20692, max=76245, avg=33085.06, stdev=2285.63 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[34341], 99.90th=[69731], 99.95th=[69731], 00:40:30.371 | 99.99th=[76022] 00:40:30.371 bw ( KiB/s): min= 1792, max= 2048, 
per=4.15%, avg=1920.00, stdev=42.67, samples=19 00:40:30.371 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:40:30.371 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.371 cpu : usr=98.81%, sys=0.79%, ctx=14, majf=0, minf=1635 00:40:30.371 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203707: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10009msec) 00:40:30.371 slat (nsec): min=5166, max=82521, avg=25950.41, stdev=7608.46 00:40:30.371 clat (usec): min=24019, max=61690, avg=33013.20, stdev=1781.17 00:40:30.371 lat (usec): min=24047, max=61716, avg=33039.15, stdev=1780.36 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[34341], 99.90th=[61604], 99.95th=[61604], 00:40:30.371 | 99.99th=[61604] 00:40:30.371 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1920.00, stdev=42.67, samples=19 00:40:30.371 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:40:30.371 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.371 cpu : usr=98.76%, sys=0.85%, ctx=14, majf=0, minf=1634 00:40:30.371 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued 
rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203708: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=483, BW=1936KiB/s (1982kB/s)(18.9MiB/10017msec) 00:40:30.371 slat (nsec): min=4580, max=75558, avg=15553.88, stdev=8120.70 00:40:30.371 clat (usec): min=16590, max=46821, avg=32920.12, stdev=1699.46 00:40:30.371 lat (usec): min=16597, max=46838, avg=32935.67, stdev=1699.63 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[22938], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:40:30.371 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:30.371 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:40:30.371 | 99.99th=[46924] 00:40:30.371 bw ( KiB/s): min= 1893, max= 2048, per=4.18%, avg=1931.45, stdev=40.31, samples=20 00:40:30.371 iops : min= 473, max= 512, avg=482.85, stdev=10.09, samples=20 00:40:30.371 lat (msec) : 20=0.70%, 50=99.30% 00:40:30.371 cpu : usr=98.77%, sys=0.85%, ctx=13, majf=0, minf=1634 00:40:30.371 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203709: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:40:30.371 slat (nsec): min=3738, max=95091, avg=23870.90, stdev=10474.94 00:40:30.371 clat (usec): min=26572, max=87593, avg=33175.74, stdev=3179.25 00:40:30.371 lat (usec): min=26582, max=87608, avg=33199.61, stdev=3177.94 
00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[34866], 99.90th=[87557], 99.95th=[87557], 00:40:30.371 | 99.99th=[87557] 00:40:30.371 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1913.26, stdev=67.11, samples=19 00:40:30.371 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:40:30.371 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.371 cpu : usr=98.67%, sys=0.94%, ctx=14, majf=0, minf=1636 00:40:30.371 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename1: (groupid=0, jobs=1): err= 0: pid=1203710: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10017msec) 00:40:30.371 slat (nsec): min=7020, max=84676, avg=27249.12, stdev=8667.66 00:40:30.371 clat (usec): min=21301, max=70345, avg=33038.18, stdev=2346.04 00:40:30.371 lat (usec): min=21311, max=70377, avg=33065.43, stdev=2345.24 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.371 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.371 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[34866], 99.90th=[70779], 99.95th=[70779], 00:40:30.371 | 99.99th=[70779] 00:40:30.371 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1919.60, stdev=71.95, samples=20 00:40:30.371 iops : min= 
416, max= 512, avg=479.90, stdev=17.99, samples=20 00:40:30.371 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.371 cpu : usr=98.84%, sys=0.78%, ctx=12, majf=0, minf=1631 00:40:30.371 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:30.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.371 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.371 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.371 filename2: (groupid=0, jobs=1): err= 0: pid=1203711: Fri Jul 12 11:46:15 2024 00:40:30.371 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10001msec) 00:40:30.371 slat (nsec): min=5068, max=53773, avg=15016.13, stdev=7156.14 00:40:30.371 clat (usec): min=19020, max=61694, avg=33098.50, stdev=2088.71 00:40:30.371 lat (usec): min=19037, max=61714, avg=33113.51, stdev=2087.78 00:40:30.371 clat percentiles (usec): 00:40:30.371 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:40:30.371 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:40:30.371 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:40:30.371 | 99.00th=[34341], 99.50th=[44827], 99.90th=[61604], 99.95th=[61604], 00:40:30.371 | 99.99th=[61604] 00:40:30.371 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1926.74, stdev=51.80, samples=19 00:40:30.371 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:40:30.371 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:40:30.371 cpu : usr=98.57%, sys=1.00%, ctx=16, majf=0, minf=1639 00:40:30.371 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 
latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203712: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10014msec) 00:40:30.372 slat (nsec): min=8365, max=94059, avg=26411.27, stdev=10804.72 00:40:30.372 clat (usec): min=27781, max=62727, avg=33042.87, stdev=1787.00 00:40:30.372 lat (usec): min=27799, max=62753, avg=33069.28, stdev=1785.89 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.372 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.372 | 99.00th=[34341], 99.50th=[34866], 99.90th=[62653], 99.95th=[62653], 00:40:30.372 | 99.99th=[62653] 00:40:30.372 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1920.00, stdev=41.53, samples=20 00:40:30.372 iops : min= 448, max= 512, avg=480.00, stdev=10.38, samples=20 00:40:30.372 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.372 cpu : usr=98.64%, sys=0.99%, ctx=12, majf=0, minf=1635 00:40:30.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203713: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10004msec) 00:40:30.372 slat (usec): min=3, max=142, avg=25.35, stdev=10.42 00:40:30.372 clat (usec): min=27587, max=85597, avg=33101.18, stdev=3074.67 00:40:30.372 lat (usec): min=27603, max=85611, avg=33126.53, stdev=3073.60 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[32113], 
5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.372 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.372 | 99.00th=[34341], 99.50th=[34866], 99.90th=[85459], 99.95th=[85459], 00:40:30.372 | 99.99th=[85459] 00:40:30.372 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1913.26, stdev=67.11, samples=19 00:40:30.372 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:40:30.372 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.372 cpu : usr=98.71%, sys=0.90%, ctx=14, majf=0, minf=1636 00:40:30.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203714: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=483, BW=1933KiB/s (1979kB/s)(18.9MiB/10005msec) 00:40:30.372 slat (nsec): min=6118, max=94865, avg=19894.39, stdev=11000.08 00:40:30.372 clat (usec): min=20425, max=71043, avg=32923.24, stdev=2811.30 00:40:30.372 lat (usec): min=20434, max=71067, avg=32943.13, stdev=2811.13 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[23462], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.372 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:40:30.372 | 99.00th=[34866], 99.50th=[51119], 99.90th=[70779], 99.95th=[70779], 00:40:30.372 | 99.99th=[70779] 00:40:30.372 bw ( KiB/s): min= 1667, max= 2192, per=4.17%, avg=1927.74, stdev=92.27, samples=19 00:40:30.372 iops : min= 416, max= 548, avg=481.89, stdev=23.18, samples=19 00:40:30.372 lat 
(msec) : 50=99.46%, 100=0.54% 00:40:30.372 cpu : usr=98.59%, sys=1.05%, ctx=14, majf=0, minf=1636 00:40:30.372 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203715: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10026msec) 00:40:30.372 slat (nsec): min=5221, max=84458, avg=21292.78, stdev=9867.57 00:40:30.372 clat (usec): min=20757, max=76514, avg=33133.50, stdev=2587.92 00:40:30.372 lat (usec): min=20767, max=76532, avg=33154.80, stdev=2586.83 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.372 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.372 | 99.00th=[34341], 99.50th=[34866], 99.90th=[76022], 99.95th=[76022], 00:40:30.372 | 99.99th=[76022] 00:40:30.372 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1920.00, stdev=71.93, samples=20 00:40:30.372 iops : min= 416, max= 512, avg=480.00, stdev=17.98, samples=20 00:40:30.372 lat (msec) : 50=99.67%, 100=0.33% 00:40:30.372 cpu : usr=98.75%, sys=0.87%, ctx=14, majf=0, minf=1639 00:40:30.372 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 
filename2: (groupid=0, jobs=1): err= 0: pid=1203716: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=485, BW=1943KiB/s (1989kB/s)(19.0MiB/10006msec) 00:40:30.372 slat (nsec): min=3868, max=64735, avg=22720.69, stdev=8676.96 00:40:30.372 clat (usec): min=16791, max=86512, avg=32741.15, stdev=4113.35 00:40:30.372 lat (usec): min=16799, max=86528, avg=32763.87, stdev=4113.49 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[20317], 5.00th=[26346], 10.00th=[32375], 20.00th=[32637], 00:40:30.372 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:40:30.372 | 99.00th=[40633], 99.50th=[44827], 99.90th=[86508], 99.95th=[86508], 00:40:30.372 | 99.99th=[86508] 00:40:30.372 bw ( KiB/s): min= 1667, max= 2240, per=4.19%, avg=1936.16, stdev=103.39, samples=19 00:40:30.372 iops : min= 416, max= 560, avg=484.00, stdev=25.96, samples=19 00:40:30.372 lat (msec) : 20=0.74%, 50=98.93%, 100=0.33% 00:40:30.372 cpu : usr=98.74%, sys=0.86%, ctx=24, majf=0, minf=1636 00:40:30.372 IO depths : 1=4.6%, 2=9.3%, 4=19.7%, 8=57.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203717: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10007msec) 00:40:30.372 slat (nsec): min=8254, max=96240, avg=26669.46, stdev=11263.28 00:40:30.372 clat (usec): min=20173, max=61452, avg=32952.34, stdev=2298.04 00:40:30.372 lat (usec): min=20182, max=61487, avg=32979.00, stdev=2298.39 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[26084], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:40:30.372 | 
30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:40:30.372 | 99.00th=[34341], 99.50th=[51119], 99.90th=[61604], 99.95th=[61604], 00:40:30.372 | 99.99th=[61604] 00:40:30.372 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1922.53, stdev=44.06, samples=19 00:40:30.372 iops : min= 448, max= 512, avg=480.63, stdev=11.02, samples=19 00:40:30.372 lat (msec) : 50=99.42%, 100=0.58% 00:40:30.372 cpu : usr=98.83%, sys=0.81%, ctx=15, majf=0, minf=1636 00:40:30.372 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 filename2: (groupid=0, jobs=1): err= 0: pid=1203718: Fri Jul 12 11:46:15 2024 00:40:30.372 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10003msec) 00:40:30.372 slat (nsec): min=7297, max=96865, avg=43348.78, stdev=15617.56 00:40:30.372 clat (usec): min=2275, max=80956, avg=32267.84, stdev=5183.62 00:40:30.372 lat (usec): min=2285, max=80986, avg=32311.19, stdev=5184.00 00:40:30.372 clat percentiles (usec): 00:40:30.372 | 1.00th=[19530], 5.00th=[25035], 10.00th=[26084], 20.00th=[29754], 00:40:30.372 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:40:30.372 | 70.00th=[33162], 80.00th=[33424], 90.00th=[36963], 95.00th=[39584], 00:40:30.372 | 99.00th=[40633], 99.50th=[42206], 99.90th=[81265], 99.95th=[81265], 00:40:30.372 | 99.99th=[81265] 00:40:30.372 bw ( KiB/s): min= 1664, max= 2336, per=4.24%, avg=1958.74, stdev=116.52, samples=19 00:40:30.372 iops : min= 416, max= 584, avg=489.68, stdev=29.13, samples=19 00:40:30.372 lat (msec) : 4=0.32%, 10=0.12%, 20=1.18%, 50=98.05%, 100=0.32% 
00:40:30.372 cpu : usr=98.70%, sys=0.87%, ctx=18, majf=0, minf=1634 00:40:30.372 IO depths : 1=0.2%, 2=0.5%, 4=2.9%, 8=80.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:40:30.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:30.372 issued rwts: total=4932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:30.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:30.372 00:40:30.372 Run status group 0 (all jobs): 00:40:30.372 READ: bw=45.1MiB/s (47.3MB/s), 1918KiB/s-1972KiB/s (1964kB/s-2020kB/s), io=453MiB (475MB), run=10001-10032msec 00:40:30.632 ----------------------------------------------------- 00:40:30.632 Suppressions used: 00:40:30.632 count bytes template 00:40:30.632 45 402 /usr/src/fio/parse.c 00:40:30.632 1 8 libtcmalloc_minimal.so 00:40:30.632 1 904 libcrypto.so 00:40:30.632 ----------------------------------------------------- 00:40:30.632 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.891 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 bdev_null0 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 [2024-07-12 11:46:17.106096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:30.892 11:46:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 bdev_null1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:30.892 { 00:40:30.892 "params": { 00:40:30.892 "name": "Nvme$subsystem", 00:40:30.892 "trtype": "$TEST_TRANSPORT", 00:40:30.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:30.892 "adrfam": "ipv4", 00:40:30.892 "trsvcid": "$NVMF_PORT", 00:40:30.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:30.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:30.892 "hdgst": ${hdgst:-false}, 00:40:30.892 "ddgst": ${ddgst:-false} 00:40:30.892 }, 00:40:30.892 "method": "bdev_nvme_attach_controller" 00:40:30.892 } 00:40:30.892 EOF 00:40:30.892 )") 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:30.892 { 00:40:30.892 "params": { 00:40:30.892 "name": "Nvme$subsystem", 00:40:30.892 "trtype": "$TEST_TRANSPORT", 00:40:30.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:30.892 "adrfam": "ipv4", 00:40:30.892 "trsvcid": "$NVMF_PORT", 00:40:30.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:30.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:30.892 "hdgst": ${hdgst:-false}, 00:40:30.892 "ddgst": ${ddgst:-false} 00:40:30.892 }, 00:40:30.892 "method": "bdev_nvme_attach_controller" 00:40:30.892 } 00:40:30.892 EOF 00:40:30.892 )") 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:30.892 "params": { 00:40:30.892 "name": "Nvme0", 00:40:30.892 "trtype": "tcp", 00:40:30.892 "traddr": "10.0.0.2", 00:40:30.892 "adrfam": "ipv4", 00:40:30.892 "trsvcid": "4420", 00:40:30.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:30.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:30.892 "hdgst": false, 00:40:30.892 "ddgst": false 00:40:30.892 }, 00:40:30.892 "method": "bdev_nvme_attach_controller" 00:40:30.892 },{ 00:40:30.892 "params": { 00:40:30.892 "name": "Nvme1", 00:40:30.892 "trtype": "tcp", 00:40:30.892 "traddr": "10.0.0.2", 00:40:30.892 "adrfam": "ipv4", 00:40:30.892 "trsvcid": "4420", 00:40:30.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:30.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:30.892 "hdgst": false, 00:40:30.892 "ddgst": false 00:40:30.892 }, 00:40:30.892 "method": "bdev_nvme_attach_controller" 00:40:30.892 }' 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:30.892 11:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:31.151 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:31.151 ... 00:40:31.151 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:31.151 ... 00:40:31.151 fio-3.35 00:40:31.151 Starting 4 threads 00:40:31.410 EAL: No free 2048 kB hugepages reported on node 1 00:40:37.974 00:40:37.974 filename0: (groupid=0, jobs=1): err= 0: pid=1205886: Fri Jul 12 11:46:23 2024 00:40:37.974 read: IOPS=2353, BW=18.4MiB/s (19.3MB/s)(92.0MiB/5003msec) 00:40:37.974 slat (nsec): min=7293, max=40801, avg=11474.82, stdev=3889.90 00:40:37.974 clat (usec): min=714, max=6406, avg=3363.33, stdev=443.65 00:40:37.974 lat (usec): min=728, max=6421, avg=3374.81, stdev=443.88 00:40:37.974 clat percentiles (usec): 00:40:37.974 | 1.00th=[ 2180], 5.00th=[ 2638], 10.00th=[ 2868], 20.00th=[ 3032], 00:40:37.974 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3458], 60.00th=[ 3490], 00:40:37.974 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3752], 95.00th=[ 3982], 00:40:37.974 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5997], 99.95th=[ 6259], 00:40:37.974 | 99.99th=[ 6390] 00:40:37.974 bw ( KiB/s): min=17872, max=19504, per=26.10%, avg=18803.56, stdev=505.44, samples=9 00:40:37.974 iops : min= 2234, max= 2438, avg=2350.44, stdev=63.18, samples=9 00:40:37.974 lat (usec) : 750=0.01%, 1000=0.04% 00:40:37.974 lat (msec) : 2=0.65%, 4=94.36%, 10=4.93% 00:40:37.975 cpu : usr=96.26%, sys=3.38%, ctx=8, majf=0, minf=1636 00:40:37.975 IO depths : 1=0.4%, 2=7.3%, 4=64.2%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 issued rwts: total=11776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.975 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:40:37.975 filename0: (groupid=0, jobs=1): err= 0: pid=1205887: Fri Jul 12 11:46:23 2024 00:40:37.975 read: IOPS=2242, BW=17.5MiB/s (18.4MB/s)(87.6MiB/5002msec) 00:40:37.975 slat (nsec): min=7328, max=76731, avg=11859.26, stdev=4014.14 00:40:37.975 clat (usec): min=931, max=8219, avg=3528.54, stdev=526.60 00:40:37.975 lat (usec): min=946, max=8244, avg=3540.40, stdev=526.40 00:40:37.975 clat percentiles (usec): 00:40:37.975 | 1.00th=[ 2376], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3228], 00:40:37.975 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:40:37.975 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 3982], 95.00th=[ 4424], 00:40:37.975 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6587], 99.95th=[ 7963], 00:40:37.975 | 99.99th=[ 8160] 00:40:37.975 bw ( KiB/s): min=17200, max=18416, per=24.79%, avg=17861.33, stdev=393.39, samples=9 00:40:37.975 iops : min= 2150, max= 2302, avg=2232.67, stdev=49.17, samples=9 00:40:37.975 lat (usec) : 1000=0.02% 00:40:37.975 lat (msec) : 2=0.36%, 4=90.10%, 10=9.53% 00:40:37.975 cpu : usr=96.52%, sys=3.12%, ctx=9, majf=0, minf=1635 00:40:37.975 IO depths : 1=0.2%, 2=7.2%, 4=65.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 issued rwts: total=11217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.975 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:37.975 filename1: (groupid=0, jobs=1): err= 0: pid=1205888: Fri Jul 12 11:46:23 2024 00:40:37.975 read: IOPS=2177, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5002msec) 00:40:37.975 slat (nsec): min=7200, max=45434, avg=11712.03, stdev=4039.33 00:40:37.975 clat (usec): min=663, max=6574, avg=3636.88, stdev=568.77 00:40:37.975 lat (usec): min=677, max=6582, avg=3648.59, stdev=568.45 00:40:37.975 clat percentiles (usec): 00:40:37.975 | 1.00th=[ 2507], 5.00th=[ 
2933], 10.00th=[ 3163], 20.00th=[ 3392], 00:40:37.975 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:40:37.975 | 70.00th=[ 3621], 80.00th=[ 3818], 90.00th=[ 4293], 95.00th=[ 4817], 00:40:37.975 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6390], 99.95th=[ 6456], 00:40:37.975 | 99.99th=[ 6587] 00:40:37.975 bw ( KiB/s): min=16864, max=17792, per=24.24%, avg=17464.89, stdev=320.01, samples=9 00:40:37.975 iops : min= 2108, max= 2224, avg=2183.11, stdev=40.00, samples=9 00:40:37.975 lat (usec) : 750=0.02%, 1000=0.06% 00:40:37.975 lat (msec) : 2=0.39%, 4=85.28%, 10=14.25% 00:40:37.975 cpu : usr=96.18%, sys=3.44%, ctx=10, majf=0, minf=1636 00:40:37.975 IO depths : 1=0.2%, 2=5.9%, 4=66.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 issued rwts: total=10890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.975 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:37.975 filename1: (groupid=0, jobs=1): err= 0: pid=1205889: Fri Jul 12 11:46:23 2024 00:40:37.975 read: IOPS=2233, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:40:37.975 slat (nsec): min=7328, max=36198, avg=11765.59, stdev=4075.76 00:40:37.975 clat (usec): min=687, max=10749, avg=3544.33, stdev=518.04 00:40:37.975 lat (usec): min=700, max=10776, avg=3556.09, stdev=517.87 00:40:37.975 clat percentiles (usec): 00:40:37.975 | 1.00th=[ 2507], 5.00th=[ 2868], 10.00th=[ 3064], 20.00th=[ 3294], 00:40:37.975 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:40:37.975 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 4015], 95.00th=[ 4424], 00:40:37.975 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6456], 99.95th=[10552], 00:40:37.975 | 99.99th=[10683] 00:40:37.975 bw ( KiB/s): min=16960, max=18432, per=24.90%, avg=17937.78, stdev=486.39, samples=9 00:40:37.975 iops : min= 2120, max= 2304, 
avg=2242.22, stdev=60.80, samples=9 00:40:37.975 lat (usec) : 750=0.01% 00:40:37.975 lat (msec) : 2=0.24%, 4=89.05%, 10=10.62%, 20=0.07% 00:40:37.975 cpu : usr=96.46%, sys=3.16%, ctx=12, majf=0, minf=1636 00:40:37.975 IO depths : 1=0.5%, 2=5.2%, 4=66.6%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:37.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:37.975 issued rwts: total=11173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:37.975 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:37.975 00:40:37.975 Run status group 0 (all jobs): 00:40:37.975 READ: bw=70.4MiB/s (73.8MB/s), 17.0MiB/s-18.4MiB/s (17.8MB/s-19.3MB/s), io=352MiB (369MB), run=5002-5003msec 00:40:38.542 ----------------------------------------------------- 00:40:38.542 Suppressions used: 00:40:38.542 count bytes template 00:40:38.542 6 52 /usr/src/fio/parse.c 00:40:38.542 1 8 libtcmalloc_minimal.so 00:40:38.542 1 904 libcrypto.so 00:40:38.542 ----------------------------------------------------- 00:40:38.542 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.542 00:40:38.542 real 0m28.378s 00:40:38.542 user 4m56.104s 00:40:38.542 sys 0m4.907s 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:38.542 11:46:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:38.542 ************************************ 00:40:38.542 END TEST fio_dif_rand_params 00:40:38.542 ************************************ 00:40:38.800 
11:46:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:38.800 11:46:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:38.800 11:46:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:38.800 11:46:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:38.800 11:46:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:38.801 ************************************ 00:40:38.801 START TEST fio_dif_digest 00:40:38.801 ************************************ 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:38.801 11:46:24 
nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:38.801 bdev_null0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:38.801 [2024-07-12 11:46:24.976092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:38.801 
11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:38.801 { 
00:40:38.801 "params": { 00:40:38.801 "name": "Nvme$subsystem", 00:40:38.801 "trtype": "$TEST_TRANSPORT", 00:40:38.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:38.801 "adrfam": "ipv4", 00:40:38.801 "trsvcid": "$NVMF_PORT", 00:40:38.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:38.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:38.801 "hdgst": ${hdgst:-false}, 00:40:38.801 "ddgst": ${ddgst:-false} 00:40:38.801 }, 00:40:38.801 "method": "bdev_nvme_attach_controller" 00:40:38.801 } 00:40:38.801 EOF 00:40:38.801 )") 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:40:38.801 11:46:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:38.801 "params": { 00:40:38.801 "name": "Nvme0", 00:40:38.801 "trtype": "tcp", 00:40:38.801 "traddr": "10.0.0.2", 00:40:38.801 "adrfam": "ipv4", 00:40:38.801 "trsvcid": "4420", 00:40:38.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:38.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:38.801 "hdgst": true, 00:40:38.801 "ddgst": true 00:40:38.801 }, 00:40:38.801 "method": "bdev_nvme_attach_controller" 00:40:38.801 }' 00:40:38.801 11:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:38.801 11:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:38.801 11:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:40:38.801 11:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:38.801 11:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:39.060 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:39.060 ... 
00:40:39.060 fio-3.35 00:40:39.060 Starting 3 threads 00:40:39.060 EAL: No free 2048 kB hugepages reported on node 1 00:40:51.257 00:40:51.257 filename0: (groupid=0, jobs=1): err= 0: pid=1207168: Fri Jul 12 11:46:36 2024 00:40:51.257 read: IOPS=255, BW=32.0MiB/s (33.6MB/s)(322MiB/10048msec) 00:40:51.257 slat (nsec): min=7601, max=95474, avg=13769.31, stdev=2376.71 00:40:51.257 clat (usec): min=7492, max=51060, avg=11686.43, stdev=1358.76 00:40:51.257 lat (usec): min=7500, max=51073, avg=11700.20, stdev=1358.80 00:40:51.257 clat percentiles (usec): 00:40:51.257 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:40:51.257 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:40:51.257 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:40:51.257 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15401], 99.95th=[47449], 00:40:51.257 | 99.99th=[51119] 00:40:51.257 bw ( KiB/s): min=31488, max=34560, per=35.67%, avg=32896.00, stdev=836.78, samples=20 00:40:51.257 iops : min= 246, max= 270, avg=257.00, stdev= 6.54, samples=20 00:40:51.257 lat (msec) : 10=2.72%, 20=97.20%, 50=0.04%, 100=0.04% 00:40:51.257 cpu : usr=95.69%, sys=3.93%, ctx=24, majf=0, minf=1640 00:40:51.257 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:51.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.257 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:51.257 filename0: (groupid=0, jobs=1): err= 0: pid=1207170: Fri Jul 12 11:46:36 2024 00:40:51.257 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(293MiB/10045msec) 00:40:51.257 slat (nsec): min=8023, max=35506, avg=14236.85, stdev=1790.47 00:40:51.257 clat (usec): min=9141, max=54413, avg=12832.04, stdev=1473.21 00:40:51.257 lat (usec): min=9154, max=54426, avg=12846.28, 
stdev=1473.33 00:40:51.257 clat percentiles (usec): 00:40:51.257 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:40:51.257 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:40:51.257 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:40:51.257 | 99.00th=[15139], 99.50th=[15401], 99.90th=[16188], 99.95th=[47973], 00:40:51.257 | 99.99th=[54264] 00:40:51.257 bw ( KiB/s): min=28672, max=31744, per=32.48%, avg=29952.00, stdev=968.61, samples=20 00:40:51.257 iops : min= 224, max= 248, avg=234.00, stdev= 7.57, samples=20 00:40:51.257 lat (msec) : 10=0.26%, 20=99.66%, 50=0.04%, 100=0.04% 00:40:51.257 cpu : usr=96.05%, sys=3.57%, ctx=20, majf=0, minf=1632 00:40:51.257 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:51.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.257 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:51.257 filename0: (groupid=0, jobs=1): err= 0: pid=1207171: Fri Jul 12 11:46:36 2024 00:40:51.257 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10004msec) 00:40:51.257 slat (nsec): min=8003, max=33639, avg=14138.22, stdev=1842.97 00:40:51.257 clat (usec): min=5786, max=20185, avg=12890.89, stdev=934.48 00:40:51.257 lat (usec): min=5798, max=20219, avg=12905.03, stdev=934.74 00:40:51.257 clat percentiles (usec): 00:40:51.257 | 1.00th=[10945], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:40:51.257 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:40:51.257 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[14353], 00:40:51.257 | 99.00th=[15270], 99.50th=[15533], 99.90th=[20055], 99.95th=[20055], 00:40:51.257 | 99.99th=[20317] 00:40:51.257 bw ( KiB/s): min=27648, max=30720, per=32.22%, avg=29709.47, stdev=706.12, 
samples=19 00:40:51.257 iops : min= 216, max= 240, avg=232.11, stdev= 5.52, samples=19 00:40:51.257 lat (msec) : 10=0.13%, 20=99.74%, 50=0.13% 00:40:51.257 cpu : usr=96.07%, sys=3.53%, ctx=45, majf=0, minf=1635 00:40:51.257 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:51.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.257 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.257 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:51.257 00:40:51.257 Run status group 0 (all jobs): 00:40:51.257 READ: bw=90.1MiB/s (94.4MB/s), 29.1MiB/s-32.0MiB/s (30.5MB/s-33.6MB/s), io=905MiB (949MB), run=10004-10048msec 00:40:51.257 ----------------------------------------------------- 00:40:51.257 Suppressions used: 00:40:51.257 count bytes template 00:40:51.257 5 44 /usr/src/fio/parse.c 00:40:51.257 1 8 libtcmalloc_minimal.so 00:40:51.257 1 904 libcrypto.so 00:40:51.257 ----------------------------------------------------- 00:40:51.257 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.257 11:46:37 
nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.257 00:40:51.257 real 0m12.550s 00:40:51.257 user 0m36.656s 00:40:51.257 sys 0m1.623s 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:51.257 11:46:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:51.257 ************************************ 00:40:51.257 END TEST fio_dif_digest 00:40:51.257 ************************************ 00:40:51.257 11:46:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:40:51.257 11:46:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:51.257 11:46:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:51.257 11:46:37 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:51.257 11:46:37 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:40:51.257 11:46:37 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:51.257 11:46:37 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:40:51.257 11:46:37 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:51.258 rmmod nvme_tcp 00:40:51.258 rmmod nvme_fabrics 00:40:51.258 rmmod nvme_keyring 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1197693 ']' 00:40:51.258 11:46:37 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1197693 00:40:51.258 11:46:37 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 
1197693 ']' 00:40:51.258 11:46:37 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1197693 00:40:51.258 11:46:37 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:40:51.258 11:46:37 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:51.258 11:46:37 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197693 00:40:51.517 11:46:37 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:51.517 11:46:37 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:51.517 11:46:37 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197693' 00:40:51.517 killing process with pid 1197693 00:40:51.517 11:46:37 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1197693 00:40:51.517 11:46:37 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1197693 00:40:52.895 11:46:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:52.895 11:46:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:55.433 Waiting for block devices as requested 00:40:55.433 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:55.433 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:55.433 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:55.433 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:55.433 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:55.433 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:55.433 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:55.692 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:55.692 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:55.692 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:55.692 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:55.952 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:55.952 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:55.952 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:56.237 0000:80:04.2 (8086 2021): 
vfio-pci -> ioatdma 00:40:56.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:56.237 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:56.237 11:46:42 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:56.237 11:46:42 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:56.237 11:46:42 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:56.237 11:46:42 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:56.237 11:46:42 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:56.237 11:46:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:56.237 11:46:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.787 11:46:44 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:58.787 00:40:58.787 real 1m22.114s 00:40:58.787 user 7m26.418s 00:40:58.787 sys 0m18.730s 00:40:58.787 11:46:44 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:58.787 11:46:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:58.787 ************************************ 00:40:58.787 END TEST nvmf_dif 00:40:58.787 ************************************ 00:40:58.787 11:46:44 -- common/autotest_common.sh@1142 -- # return 0 00:40:58.787 11:46:44 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:58.787 11:46:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:58.787 11:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:58.787 11:46:44 -- common/autotest_common.sh@10 -- # set +x 00:40:58.787 ************************************ 00:40:58.787 START TEST nvmf_abort_qd_sizes 00:40:58.787 ************************************ 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:58.787 
* Looking for test storage... 00:40:58.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:58.787 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:58.788 11:46:44 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:40:58.788 11:46:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:04.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:04.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:04.062 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:41:04.063 Found net devices under 0000:86:00.0: cvl_0_0 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:04.063 Found net devices under 0000:86:00.1: cvl_0_1 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.063 11:46:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:04.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:04.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:41:04.063 00:41:04.063 --- 10.0.0.2 ping statistics --- 00:41:04.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.063 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:04.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:41:04.063 00:41:04.063 --- 10.0.0.1 ping statistics --- 00:41:04.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.063 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:04.063 11:46:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:06.600 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:06.600 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:41:06.600 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:07.537 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:07.537 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1215310 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1215310 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1215310 ']' 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:07.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:07.796 11:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:07.796 [2024-07-12 11:46:54.012868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:07.796 [2024-07-12 11:46:54.012954] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.796 EAL: No free 2048 kB hugepages reported on node 1 00:41:07.796 [2024-07-12 11:46:54.124913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:08.056 [2024-07-12 11:46:54.353943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.056 [2024-07-12 11:46:54.353990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:08.056 [2024-07-12 11:46:54.354002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.056 [2024-07-12 11:46:54.354011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.056 [2024-07-12 11:46:54.354021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:08.056 [2024-07-12 11:46:54.354095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.056 [2024-07-12 11:46:54.354182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:08.056 [2024-07-12 11:46:54.354204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.056 [2024-07-12 11:46:54.354204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:08.624 11:46:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:08.624 ************************************ 00:41:08.624 START TEST spdk_target_abort 00:41:08.624 ************************************ 00:41:08.624 11:46:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:41:08.624 11:46:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:08.624 11:46:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:41:08.624 11:46:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:08.624 11:46:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:11.911 spdk_targetn1 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:11.911 [2024-07-12 11:46:57.746343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:11.911 [2024-07-12 11:46:57.793863] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:11.911 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:11.912 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:11.912 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:11.912 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:11.912 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:11.912 11:46:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:11.912 EAL: No free 2048 kB hugepages reported on node 1 00:41:15.196 Initializing NVMe Controllers 00:41:15.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:15.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:15.196 Initialization complete. Launching workers. 
00:41:15.196 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14207, failed: 0 00:41:15.196 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1327, failed to submit 12880 00:41:15.196 success 756, unsuccess 571, failed 0 00:41:15.196 11:47:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:15.196 11:47:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:15.196 EAL: No free 2048 kB hugepages reported on node 1 00:41:18.482 Initializing NVMe Controllers 00:41:18.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:18.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:18.482 Initialization complete. Launching workers. 
00:41:18.482 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8447, failed: 0 00:41:18.482 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1272, failed to submit 7175 00:41:18.482 success 300, unsuccess 972, failed 0 00:41:18.482 11:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:18.482 11:47:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:18.482 EAL: No free 2048 kB hugepages reported on node 1 00:41:21.770 Initializing NVMe Controllers 00:41:21.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:21.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:21.770 Initialization complete. Launching workers. 
00:41:21.770 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33300, failed: 0 00:41:21.770 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2773, failed to submit 30527 00:41:21.770 success 552, unsuccess 2221, failed 0 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:21.770 11:47:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1215310 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1215310 ']' 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1215310 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1215310 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1215310' 00:41:23.147 killing process with pid 1215310 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1215310 00:41:23.147 11:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1215310 00:41:24.084 00:41:24.084 real 0m15.524s 00:41:24.084 user 1m0.107s 00:41:24.084 sys 0m2.236s 00:41:24.084 11:47:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:24.084 11:47:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:24.084 ************************************ 00:41:24.084 END TEST spdk_target_abort 00:41:24.084 ************************************ 00:41:24.084 11:47:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:24.084 11:47:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:24.084 11:47:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:24.084 11:47:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:24.084 11:47:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:24.377 ************************************ 00:41:24.377 START TEST kernel_target_abort 00:41:24.377 ************************************ 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:24.377 11:47:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:24.377 11:47:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:26.914 Waiting for block devices as requested 00:41:26.914 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:41:26.914 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:26.914 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:26.914 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:27.174 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:27.174 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:27.174 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:27.174 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:27.433 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:27.433 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:27.433 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:27.433 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:27.692 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:27.692 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:27.692 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:27.951 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:27.951 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:28.519 No valid GPT data, bailing 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:28.519 11:47:14 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:28.519 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:41:28.778 00:41:28.778 Discovery Log Number of Records 2, Generation counter 2 00:41:28.778 =====Discovery Log Entry 0====== 00:41:28.778 trtype: tcp 00:41:28.778 adrfam: ipv4 00:41:28.778 subtype: current discovery subsystem 00:41:28.778 treq: not specified, sq flow control disable supported 00:41:28.778 portid: 1 00:41:28.778 trsvcid: 4420 00:41:28.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:28.778 traddr: 10.0.0.1 00:41:28.778 eflags: none 00:41:28.778 sectype: none 00:41:28.778 =====Discovery Log Entry 1====== 00:41:28.778 trtype: tcp 00:41:28.778 adrfam: ipv4 00:41:28.778 subtype: nvme subsystem 00:41:28.778 treq: not specified, sq flow control disable supported 00:41:28.778 portid: 1 00:41:28.778 trsvcid: 4420 00:41:28.778 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:28.778 traddr: 10.0.0.1 00:41:28.778 eflags: none 00:41:28.778 
sectype: none 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:28.778 11:47:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:28.778 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.065 Initializing NVMe Controllers 00:41:32.065 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:32.065 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:32.065 Initialization complete. Launching workers. 
00:41:32.065 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79741, failed: 0 00:41:32.065 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79741, failed to submit 0 00:41:32.065 success 0, unsuccess 79741, failed 0 00:41:32.065 11:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:32.065 11:47:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:32.065 EAL: No free 2048 kB hugepages reported on node 1 00:41:35.353 Initializing NVMe Controllers 00:41:35.353 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:35.353 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:35.353 Initialization complete. Launching workers. 
00:41:35.353 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126021, failed: 0 00:41:35.353 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31622, failed to submit 94399 00:41:35.353 success 0, unsuccess 31622, failed 0 00:41:35.353 11:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:35.353 11:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:35.353 EAL: No free 2048 kB hugepages reported on node 1 00:41:38.643 Initializing NVMe Controllers 00:41:38.643 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:38.643 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:38.643 Initialization complete. Launching workers. 
00:41:38.643 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121043, failed: 0 00:41:38.643 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30274, failed to submit 90769 00:41:38.643 success 0, unsuccess 30274, failed 0 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:38.643 11:47:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:40.550 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:40.550 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:41.488 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:41:41.488 00:41:41.488 real 0m17.240s 00:41:41.488 user 0m8.886s 00:41:41.488 sys 0m5.054s 00:41:41.488 11:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:41.488 11:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:41.488 ************************************ 00:41:41.488 END TEST kernel_target_abort 00:41:41.488 ************************************ 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:41.488 rmmod nvme_tcp 00:41:41.488 rmmod nvme_fabrics 
00:41:41.488 rmmod nvme_keyring 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1215310 ']' 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1215310 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1215310 ']' 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1215310 00:41:41.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1215310) - No such process 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1215310 is not found' 00:41:41.488 Process with pid 1215310 is not found 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:41:41.488 11:47:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:44.023 Waiting for block devices as requested 00:41:44.023 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:41:44.282 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:44.282 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:44.282 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:44.540 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:44.540 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:44.540 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:44.540 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:44.799 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:44.799 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:44.799 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:44.799 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:45.058 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:41:45.058 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:45.058 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:45.316 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:45.316 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:45.316 11:47:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:47.880 11:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:47.880 00:41:47.880 real 0m48.963s 00:41:47.880 user 1m13.038s 00:41:47.880 sys 0m15.399s 00:41:47.880 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:47.880 11:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:47.880 ************************************ 00:41:47.880 END TEST nvmf_abort_qd_sizes 00:41:47.880 ************************************ 00:41:47.880 11:47:33 -- common/autotest_common.sh@1142 -- # return 0 00:41:47.880 11:47:33 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:47.880 11:47:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:47.880 11:47:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:47.880 11:47:33 -- common/autotest_common.sh@10 -- # set +x 00:41:47.880 ************************************ 00:41:47.880 START TEST keyring_file 00:41:47.880 
************************************ 00:41:47.880 11:47:33 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:47.880 * Looking for test storage... 00:41:47.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:47.880 
11:47:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:47.880 11:47:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:47.880 11:47:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:47.880 11:47:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:47.880 11:47:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.880 11:47:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.880 11:47:33 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.880 11:47:33 
keyring_file -- paths/export.sh@5 -- # export PATH 00:41:47.880 11:47:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:47.880 11:47:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iPryIUCoa2 00:41:47.880 11:47:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:47.880 11:47:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iPryIUCoa2 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iPryIUCoa2 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iPryIUCoa2 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XcaHthRYVn 00:41:47.881 11:47:33 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:47.881 11:47:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XcaHthRYVn 00:41:47.881 11:47:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XcaHthRYVn 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XcaHthRYVn 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=1224467 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:47.881 11:47:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1224467 00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1224467 ']' 00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:47.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:47.881 11:47:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:47.881 [2024-07-12 11:47:34.006808] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:47.881 [2024-07-12 11:47:34.006903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224467 ] 00:41:47.881 EAL: No free 2048 kB hugepages reported on node 1 00:41:47.881 [2024-07-12 11:47:34.112058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:48.148 [2024-07-12 11:47:34.329958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:41:49.083 11:47:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:49.083 [2024-07-12 11:47:35.214141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.083 null0 00:41:49.083 [2024-07-12 11:47:35.246173] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:49.083 [2024-07-12 11:47:35.246545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:49.083 [2024-07-12 11:47:35.254221] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:49.083 11:47:35 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:49.083 [2024-07-12 11:47:35.266227] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:49.083 request: 00:41:49.083 { 00:41:49.083 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.083 "secure_channel": false, 00:41:49.083 "listen_address": { 00:41:49.083 "trtype": "tcp", 00:41:49.083 "traddr": "127.0.0.1", 00:41:49.083 "trsvcid": "4420" 00:41:49.083 }, 00:41:49.083 "method": "nvmf_subsystem_add_listener", 00:41:49.083 "req_id": 1 00:41:49.083 } 00:41:49.083 Got JSON-RPC error response 00:41:49.083 response: 00:41:49.083 { 00:41:49.083 "code": -32602, 00:41:49.083 "message": "Invalid parameters" 00:41:49.083 } 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:49.083 11:47:35 keyring_file -- keyring/file.sh@46 -- # bperfpid=1224638 00:41:49.083 11:47:35 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1224638 /var/tmp/bperf.sock 00:41:49.083 11:47:35 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1224638 ']' 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:49.083 11:47:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:49.083 [2024-07-12 11:47:35.343704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:49.084 [2024-07-12 11:47:35.343795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224638 ] 00:41:49.084 EAL: No free 2048 kB hugepages reported on node 1 00:41:49.342 [2024-07-12 11:47:35.447875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.342 [2024-07-12 11:47:35.667837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:49.910 11:47:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:49.910 11:47:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:41:49.910 11:47:36 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:49.910 11:47:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:50.170 11:47:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XcaHthRYVn 00:41:50.170 11:47:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XcaHthRYVn 00:41:50.170 11:47:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:41:50.170 11:47:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:41:50.170 11:47:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:50.170 11:47:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:50.170 11:47:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:50.429 11:47:36 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.iPryIUCoa2 == 
\/\t\m\p\/\t\m\p\.\i\P\r\y\I\U\C\o\a\2 ]] 00:41:50.429 11:47:36 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:41:50.429 11:47:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:50.429 11:47:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:50.429 11:47:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:50.429 11:47:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:50.688 11:47:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XcaHthRYVn == \/\t\m\p\/\t\m\p\.\X\c\a\H\t\h\R\Y\V\n ]] 00:41:50.688 11:47:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:41:50.688 11:47:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:50.688 11:47:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:50.688 11:47:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:50.688 11:47:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:50.688 11:47:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:50.688 11:47:37 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:41:50.688 11:47:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:41:50.688 11:47:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:50.688 11:47:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:50.688 11:47:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:50.688 11:47:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:50.688 11:47:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:50.945 11:47:37 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:41:50.945 11:47:37 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:50.945 11:47:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:51.203 [2024-07-12 11:47:37.342684] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:51.203 nvme0n1 00:41:51.203 11:47:37 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:41:51.203 11:47:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:51.203 11:47:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:51.203 11:47:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:51.203 11:47:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:51.203 11:47:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:51.462 11:47:37 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:41:51.462 11:47:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:41:51.462 11:47:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:51.462 11:47:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:51.462 11:47:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:51.462 11:47:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:51.462 11:47:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:51.462 11:47:37 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:41:51.462 11:47:37 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:51.721 Running I/O for 1 seconds... 00:41:52.658 00:41:52.658 Latency(us) 00:41:52.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:52.658 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:52.658 nvme0n1 : 1.00 14690.75 57.39 0.00 0.00 8689.36 4445.05 16526.47 00:41:52.658 =================================================================================================================== 00:41:52.658 Total : 14690.75 57.39 0.00 0.00 8689.36 4445.05 16526.47 00:41:52.658 0 00:41:52.658 11:47:38 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:52.658 11:47:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:52.917 11:47:39 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:41:52.917 11:47:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:52.917 11:47:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:52.917 11:47:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:52.917 11:47:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:52.917 11:47:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:53.176 11:47:39 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:41:53.176 11:47:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:41:53.176 11:47:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:53.176 11:47:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:53.176 11:47:39 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:53.176 11:47:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:53.176 11:47:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:53.176 11:47:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:53.176 11:47:39 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:53.176 11:47:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:53.176 11:47:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:53.435 [2024-07-12 11:47:39.632111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:53.435 [2024-07-12 11:47:39.632425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332280 (107): Transport endpoint is not connected 00:41:53.435 [2024-07-12 11:47:39.633409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332280 (9): Bad file descriptor 00:41:53.435 [2024-07-12 11:47:39.634407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.435 [2024-07-12 11:47:39.634436] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:53.435 [2024-07-12 11:47:39.634449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.435 request: 00:41:53.435 { 00:41:53.435 "name": "nvme0", 00:41:53.435 "trtype": "tcp", 00:41:53.435 "traddr": "127.0.0.1", 00:41:53.435 "adrfam": "ipv4", 00:41:53.435 "trsvcid": "4420", 00:41:53.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:53.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:53.435 "prchk_reftag": false, 00:41:53.435 "prchk_guard": false, 00:41:53.435 "hdgst": false, 00:41:53.435 "ddgst": false, 00:41:53.435 "psk": "key1", 00:41:53.435 "method": "bdev_nvme_attach_controller", 00:41:53.435 "req_id": 1 00:41:53.435 } 00:41:53.435 Got JSON-RPC error response 00:41:53.435 response: 00:41:53.435 { 00:41:53.435 "code": -5, 00:41:53.435 "message": "Input/output error" 00:41:53.435 } 00:41:53.435 11:47:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:53.435 11:47:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:53.435 11:47:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:53.435 11:47:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:53.435 11:47:39 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 
00:41:53.435 11:47:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:53.435 11:47:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:53.435 11:47:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:53.435 11:47:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:53.435 11:47:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:53.694 11:47:39 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:41:53.694 11:47:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:41:53.694 11:47:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:53.694 11:47:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:53.694 11:47:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:53.694 11:47:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:53.694 11:47:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:53.694 11:47:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:53.694 11:47:40 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:41:53.694 11:47:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:53.953 11:47:40 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:41:53.953 11:47:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:54.212 11:47:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:41:54.212 11:47:40 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:54.212 11:47:40 keyring_file -- keyring/file.sh@77 -- # jq length 00:41:54.212 11:47:40 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:41:54.212 11:47:40 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.iPryIUCoa2 00:41:54.212 11:47:40 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.212 11:47:40 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.212 11:47:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.471 [2024-07-12 11:47:40.710966] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iPryIUCoa2': 0100660 00:41:54.471 [2024-07-12 11:47:40.711002] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:54.471 request: 00:41:54.471 { 00:41:54.471 "name": "key0", 00:41:54.471 "path": "/tmp/tmp.iPryIUCoa2", 00:41:54.471 "method": "keyring_file_add_key", 00:41:54.471 "req_id": 1 00:41:54.471 } 00:41:54.471 Got JSON-RPC error response 00:41:54.471 response: 00:41:54.471 { 00:41:54.471 "code": -1, 
00:41:54.471 "message": "Operation not permitted" 00:41:54.471 } 00:41:54.471 11:47:40 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:54.471 11:47:40 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:54.471 11:47:40 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:54.471 11:47:40 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:54.471 11:47:40 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.iPryIUCoa2 00:41:54.471 11:47:40 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.471 11:47:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iPryIUCoa2 00:41:54.730 11:47:40 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.iPryIUCoa2 00:41:54.730 11:47:40 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:41:54.730 11:47:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:54.730 11:47:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:54.730 11:47:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:54.730 11:47:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:54.730 11:47:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:54.730 11:47:41 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:41:54.730 11:47:41 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:54.730 11:47:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:54.730 11:47:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:54.730 11:47:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:54.990 11:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:54.990 [2024-07-12 11:47:41.236421] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iPryIUCoa2': No such file or directory 00:41:54.990 [2024-07-12 11:47:41.236463] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:54.990 [2024-07-12 11:47:41.236488] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:54.990 [2024-07-12 11:47:41.236497] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:54.990 [2024-07-12 11:47:41.236507] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:54.990 request: 00:41:54.990 { 00:41:54.990 "name": "nvme0", 00:41:54.990 "trtype": "tcp", 00:41:54.990 "traddr": "127.0.0.1", 00:41:54.990 "adrfam": "ipv4", 00:41:54.990 "trsvcid": "4420", 00:41:54.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:54.990 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:54.990 "prchk_reftag": false, 00:41:54.990 "prchk_guard": false, 00:41:54.990 "hdgst": false, 00:41:54.990 "ddgst": false, 00:41:54.990 "psk": "key0", 00:41:54.990 "method": "bdev_nvme_attach_controller", 00:41:54.990 "req_id": 1 00:41:54.990 } 00:41:54.990 Got JSON-RPC error response 00:41:54.990 response: 00:41:54.990 { 00:41:54.990 "code": -19, 00:41:54.990 "message": "No such device" 00:41:54.990 } 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:54.990 11:47:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:54.990 11:47:41 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:41:54.990 11:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:55.250 11:47:41 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.thfsWnoC3F 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:55.250 11:47:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.thfsWnoC3F 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.thfsWnoC3F 00:41:55.250 11:47:41 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.thfsWnoC3F 00:41:55.250 11:47:41 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.thfsWnoC3F 00:41:55.250 11:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.thfsWnoC3F 00:41:55.509 11:47:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:55.509 11:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:55.768 nvme0n1 00:41:55.768 11:47:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:41:55.768 11:47:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:55.768 11:47:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:55.768 11:47:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:55.768 11:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:55.768 11:47:41 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:55.768 11:47:42 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:41:55.768 11:47:42 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:41:55.768 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:56.027 11:47:42 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:41:56.027 11:47:42 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:41:56.027 11:47:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:56.027 11:47:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:56.027 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:56.286 11:47:42 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:41:56.286 11:47:42 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:56.286 11:47:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:41:56.286 11:47:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:56.286 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:56.545 11:47:42 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:41:56.545 11:47:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:41:56.545 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:56.806 11:47:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:41:56.806 11:47:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.thfsWnoC3F 00:41:56.806 11:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.thfsWnoC3F 00:41:56.806 11:47:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XcaHthRYVn 00:41:56.806 11:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XcaHthRYVn 00:41:57.063 11:47:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:57.063 11:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:57.321 nvme0n1 00:41:57.321 11:47:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:41:57.321 11:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:57.579 11:47:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:41:57.579 "subsystems": [ 00:41:57.579 { 00:41:57.579 "subsystem": "keyring", 00:41:57.579 "config": [ 00:41:57.579 { 
00:41:57.579 "method": "keyring_file_add_key", 00:41:57.579 "params": { 00:41:57.579 "name": "key0", 00:41:57.579 "path": "/tmp/tmp.thfsWnoC3F" 00:41:57.579 } 00:41:57.579 }, 00:41:57.579 { 00:41:57.579 "method": "keyring_file_add_key", 00:41:57.579 "params": { 00:41:57.579 "name": "key1", 00:41:57.579 "path": "/tmp/tmp.XcaHthRYVn" 00:41:57.579 } 00:41:57.579 } 00:41:57.579 ] 00:41:57.579 }, 00:41:57.579 { 00:41:57.579 "subsystem": "iobuf", 00:41:57.579 "config": [ 00:41:57.579 { 00:41:57.579 "method": "iobuf_set_options", 00:41:57.580 "params": { 00:41:57.580 "small_pool_count": 8192, 00:41:57.580 "large_pool_count": 1024, 00:41:57.580 "small_bufsize": 8192, 00:41:57.580 "large_bufsize": 135168 00:41:57.580 } 00:41:57.580 } 00:41:57.580 ] 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "subsystem": "sock", 00:41:57.580 "config": [ 00:41:57.580 { 00:41:57.580 "method": "sock_set_default_impl", 00:41:57.580 "params": { 00:41:57.580 "impl_name": "posix" 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "sock_impl_set_options", 00:41:57.580 "params": { 00:41:57.580 "impl_name": "ssl", 00:41:57.580 "recv_buf_size": 4096, 00:41:57.580 "send_buf_size": 4096, 00:41:57.580 "enable_recv_pipe": true, 00:41:57.580 "enable_quickack": false, 00:41:57.580 "enable_placement_id": 0, 00:41:57.580 "enable_zerocopy_send_server": true, 00:41:57.580 "enable_zerocopy_send_client": false, 00:41:57.580 "zerocopy_threshold": 0, 00:41:57.580 "tls_version": 0, 00:41:57.580 "enable_ktls": false 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "sock_impl_set_options", 00:41:57.580 "params": { 00:41:57.580 "impl_name": "posix", 00:41:57.580 "recv_buf_size": 2097152, 00:41:57.580 "send_buf_size": 2097152, 00:41:57.580 "enable_recv_pipe": true, 00:41:57.580 "enable_quickack": false, 00:41:57.580 "enable_placement_id": 0, 00:41:57.580 "enable_zerocopy_send_server": true, 00:41:57.580 "enable_zerocopy_send_client": false, 00:41:57.580 "zerocopy_threshold": 0, 
00:41:57.580 "tls_version": 0, 00:41:57.580 "enable_ktls": false 00:41:57.580 } 00:41:57.580 } 00:41:57.580 ] 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "subsystem": "vmd", 00:41:57.580 "config": [] 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "subsystem": "accel", 00:41:57.580 "config": [ 00:41:57.580 { 00:41:57.580 "method": "accel_set_options", 00:41:57.580 "params": { 00:41:57.580 "small_cache_size": 128, 00:41:57.580 "large_cache_size": 16, 00:41:57.580 "task_count": 2048, 00:41:57.580 "sequence_count": 2048, 00:41:57.580 "buf_count": 2048 00:41:57.580 } 00:41:57.580 } 00:41:57.580 ] 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "subsystem": "bdev", 00:41:57.580 "config": [ 00:41:57.580 { 00:41:57.580 "method": "bdev_set_options", 00:41:57.580 "params": { 00:41:57.580 "bdev_io_pool_size": 65535, 00:41:57.580 "bdev_io_cache_size": 256, 00:41:57.580 "bdev_auto_examine": true, 00:41:57.580 "iobuf_small_cache_size": 128, 00:41:57.580 "iobuf_large_cache_size": 16 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_raid_set_options", 00:41:57.580 "params": { 00:41:57.580 "process_window_size_kb": 1024 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_iscsi_set_options", 00:41:57.580 "params": { 00:41:57.580 "timeout_sec": 30 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_nvme_set_options", 00:41:57.580 "params": { 00:41:57.580 "action_on_timeout": "none", 00:41:57.580 "timeout_us": 0, 00:41:57.580 "timeout_admin_us": 0, 00:41:57.580 "keep_alive_timeout_ms": 10000, 00:41:57.580 "arbitration_burst": 0, 00:41:57.580 "low_priority_weight": 0, 00:41:57.580 "medium_priority_weight": 0, 00:41:57.580 "high_priority_weight": 0, 00:41:57.580 "nvme_adminq_poll_period_us": 10000, 00:41:57.580 "nvme_ioq_poll_period_us": 0, 00:41:57.580 "io_queue_requests": 512, 00:41:57.580 "delay_cmd_submit": true, 00:41:57.580 "transport_retry_count": 4, 00:41:57.580 "bdev_retry_count": 3, 00:41:57.580 
"transport_ack_timeout": 0, 00:41:57.580 "ctrlr_loss_timeout_sec": 0, 00:41:57.580 "reconnect_delay_sec": 0, 00:41:57.580 "fast_io_fail_timeout_sec": 0, 00:41:57.580 "disable_auto_failback": false, 00:41:57.580 "generate_uuids": false, 00:41:57.580 "transport_tos": 0, 00:41:57.580 "nvme_error_stat": false, 00:41:57.580 "rdma_srq_size": 0, 00:41:57.580 "io_path_stat": false, 00:41:57.580 "allow_accel_sequence": false, 00:41:57.580 "rdma_max_cq_size": 0, 00:41:57.580 "rdma_cm_event_timeout_ms": 0, 00:41:57.580 "dhchap_digests": [ 00:41:57.580 "sha256", 00:41:57.580 "sha384", 00:41:57.580 "sha512" 00:41:57.580 ], 00:41:57.580 "dhchap_dhgroups": [ 00:41:57.580 "null", 00:41:57.580 "ffdhe2048", 00:41:57.580 "ffdhe3072", 00:41:57.580 "ffdhe4096", 00:41:57.580 "ffdhe6144", 00:41:57.580 "ffdhe8192" 00:41:57.580 ] 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_nvme_attach_controller", 00:41:57.580 "params": { 00:41:57.580 "name": "nvme0", 00:41:57.580 "trtype": "TCP", 00:41:57.580 "adrfam": "IPv4", 00:41:57.580 "traddr": "127.0.0.1", 00:41:57.580 "trsvcid": "4420", 00:41:57.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:57.580 "prchk_reftag": false, 00:41:57.580 "prchk_guard": false, 00:41:57.580 "ctrlr_loss_timeout_sec": 0, 00:41:57.580 "reconnect_delay_sec": 0, 00:41:57.580 "fast_io_fail_timeout_sec": 0, 00:41:57.580 "psk": "key0", 00:41:57.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:57.580 "hdgst": false, 00:41:57.580 "ddgst": false 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_nvme_set_hotplug", 00:41:57.580 "params": { 00:41:57.580 "period_us": 100000, 00:41:57.580 "enable": false 00:41:57.580 } 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "method": "bdev_wait_for_examine" 00:41:57.580 } 00:41:57.580 ] 00:41:57.580 }, 00:41:57.580 { 00:41:57.580 "subsystem": "nbd", 00:41:57.580 "config": [] 00:41:57.580 } 00:41:57.580 ] 00:41:57.580 }' 00:41:57.580 11:47:43 keyring_file -- keyring/file.sh@114 -- # 
killprocess 1224638 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1224638 ']' 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1224638 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224638 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224638' 00:41:57.580 killing process with pid 1224638 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@967 -- # kill 1224638 00:41:57.580 Received shutdown signal, test time was about 1.000000 seconds 00:41:57.580 00:41:57.580 Latency(us) 00:41:57.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:57.580 =================================================================================================================== 00:41:57.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:57.580 11:47:43 keyring_file -- common/autotest_common.sh@972 -- # wait 1224638 00:41:58.957 11:47:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=1226360 00:41:58.957 11:47:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1226360 /var/tmp/bperf.sock 00:41:58.957 11:47:44 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1226360 ']' 00:41:58.957 11:47:44 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:58.957 11:47:44 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 
00:41:58.957 11:47:44 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:58.957 11:47:44 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:58.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:58.957 11:47:44 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:58.957 11:47:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:41:58.957 "subsystems": [ 00:41:58.957 { 00:41:58.957 "subsystem": "keyring", 00:41:58.957 "config": [ 00:41:58.957 { 00:41:58.957 "method": "keyring_file_add_key", 00:41:58.957 "params": { 00:41:58.957 "name": "key0", 00:41:58.957 "path": "/tmp/tmp.thfsWnoC3F" 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "method": "keyring_file_add_key", 00:41:58.957 "params": { 00:41:58.957 "name": "key1", 00:41:58.957 "path": "/tmp/tmp.XcaHthRYVn" 00:41:58.957 } 00:41:58.957 } 00:41:58.957 ] 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "subsystem": "iobuf", 00:41:58.957 "config": [ 00:41:58.957 { 00:41:58.957 "method": "iobuf_set_options", 00:41:58.957 "params": { 00:41:58.957 "small_pool_count": 8192, 00:41:58.957 "large_pool_count": 1024, 00:41:58.957 "small_bufsize": 8192, 00:41:58.957 "large_bufsize": 135168 00:41:58.957 } 00:41:58.957 } 00:41:58.957 ] 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "subsystem": "sock", 00:41:58.957 "config": [ 00:41:58.957 { 00:41:58.957 "method": "sock_set_default_impl", 00:41:58.957 "params": { 00:41:58.957 "impl_name": "posix" 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "method": "sock_impl_set_options", 00:41:58.957 "params": { 00:41:58.957 "impl_name": "ssl", 00:41:58.957 "recv_buf_size": 4096, 00:41:58.957 "send_buf_size": 4096, 00:41:58.957 "enable_recv_pipe": true, 00:41:58.957 "enable_quickack": false, 00:41:58.957 "enable_placement_id": 0, 00:41:58.957 "enable_zerocopy_send_server": true, 
00:41:58.957 "enable_zerocopy_send_client": false, 00:41:58.957 "zerocopy_threshold": 0, 00:41:58.957 "tls_version": 0, 00:41:58.957 "enable_ktls": false 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "method": "sock_impl_set_options", 00:41:58.957 "params": { 00:41:58.957 "impl_name": "posix", 00:41:58.957 "recv_buf_size": 2097152, 00:41:58.957 "send_buf_size": 2097152, 00:41:58.957 "enable_recv_pipe": true, 00:41:58.957 "enable_quickack": false, 00:41:58.957 "enable_placement_id": 0, 00:41:58.957 "enable_zerocopy_send_server": true, 00:41:58.957 "enable_zerocopy_send_client": false, 00:41:58.957 "zerocopy_threshold": 0, 00:41:58.957 "tls_version": 0, 00:41:58.957 "enable_ktls": false 00:41:58.957 } 00:41:58.957 } 00:41:58.957 ] 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "subsystem": "vmd", 00:41:58.957 "config": [] 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "subsystem": "accel", 00:41:58.957 "config": [ 00:41:58.957 { 00:41:58.957 "method": "accel_set_options", 00:41:58.957 "params": { 00:41:58.957 "small_cache_size": 128, 00:41:58.957 "large_cache_size": 16, 00:41:58.957 "task_count": 2048, 00:41:58.957 "sequence_count": 2048, 00:41:58.957 "buf_count": 2048 00:41:58.957 } 00:41:58.957 } 00:41:58.957 ] 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "subsystem": "bdev", 00:41:58.957 "config": [ 00:41:58.957 { 00:41:58.957 "method": "bdev_set_options", 00:41:58.957 "params": { 00:41:58.957 "bdev_io_pool_size": 65535, 00:41:58.957 "bdev_io_cache_size": 256, 00:41:58.957 "bdev_auto_examine": true, 00:41:58.957 "iobuf_small_cache_size": 128, 00:41:58.957 "iobuf_large_cache_size": 16 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "method": "bdev_raid_set_options", 00:41:58.957 "params": { 00:41:58.957 "process_window_size_kb": 1024 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 "method": "bdev_iscsi_set_options", 00:41:58.957 "params": { 00:41:58.957 "timeout_sec": 30 00:41:58.957 } 00:41:58.957 }, 00:41:58.957 { 00:41:58.957 
"method": "bdev_nvme_set_options", 00:41:58.957 "params": { 00:41:58.957 "action_on_timeout": "none", 00:41:58.957 "timeout_us": 0, 00:41:58.957 "timeout_admin_us": 0, 00:41:58.958 "keep_alive_timeout_ms": 10000, 00:41:58.958 "arbitration_burst": 0, 00:41:58.958 "low_priority_weight": 0, 00:41:58.958 "medium_priority_weight": 0, 00:41:58.958 "high_priority_weight": 0, 00:41:58.958 "nvme_adminq_poll_period_us": 10000, 00:41:58.958 "nvme_ioq_poll_period_us": 0, 00:41:58.958 "io_queue_requests": 512, 00:41:58.958 "delay_cmd_submit": true, 00:41:58.958 "transport_retry_count": 4, 00:41:58.958 "bdev_retry_count": 3, 00:41:58.958 "transport_ack_timeout": 0, 00:41:58.958 "ctrlr_loss_timeout_sec": 0, 00:41:58.958 "reconnect_delay_sec": 0, 00:41:58.958 "fast_io_fail_timeout_sec": 0, 00:41:58.958 "disable_auto_failback": false, 00:41:58.958 "generate_uuids": false, 00:41:58.958 "transport_tos": 0, 00:41:58.958 "nvme_error_stat": false, 00:41:58.958 "rdma_srq_size": 0, 00:41:58.958 "io_path_stat": false, 00:41:58.958 "allow_accel_sequence": false, 00:41:58.958 "rdma_max_cq_size": 0, 00:41:58.958 "rdma_cm_event_timeout_ms": 0, 00:41:58.958 "dhchap_digests": [ 00:41:58.958 "sha256", 00:41:58.958 "sha384", 00:41:58.958 "sha512" 00:41:58.958 ], 00:41:58.958 "dhchap_dhgroups": [ 00:41:58.958 "null", 00:41:58.958 "ffdhe2048", 00:41:58.958 "ffdhe3072", 00:41:58.958 "ffdhe4096", 00:41:58.958 "ffdhe6144", 00:41:58.958 "ffdhe8192" 00:41:58.958 ] 00:41:58.958 } 00:41:58.958 }, 00:41:58.958 { 00:41:58.958 "method": "bdev_nvme_attach_controller", 00:41:58.958 "params": { 00:41:58.958 "name": "nvme0", 00:41:58.958 "trtype": "TCP", 00:41:58.958 "adrfam": "IPv4", 00:41:58.958 "traddr": "127.0.0.1", 00:41:58.958 "trsvcid": "4420", 00:41:58.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:58.958 "prchk_reftag": false, 00:41:58.958 "prchk_guard": false, 00:41:58.958 "ctrlr_loss_timeout_sec": 0, 00:41:58.958 "reconnect_delay_sec": 0, 00:41:58.958 "fast_io_fail_timeout_sec": 0, 00:41:58.958 
"psk": "key0", 00:41:58.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:58.958 "hdgst": false, 00:41:58.958 "ddgst": false 00:41:58.958 } 00:41:58.958 }, 00:41:58.958 { 00:41:58.958 "method": "bdev_nvme_set_hotplug", 00:41:58.958 "params": { 00:41:58.958 "period_us": 100000, 00:41:58.958 "enable": false 00:41:58.958 } 00:41:58.958 }, 00:41:58.958 { 00:41:58.958 "method": "bdev_wait_for_examine" 00:41:58.958 } 00:41:58.958 ] 00:41:58.958 }, 00:41:58.958 { 00:41:58.958 "subsystem": "nbd", 00:41:58.958 "config": [] 00:41:58.958 } 00:41:58.958 ] 00:41:58.958 }' 00:41:58.958 11:47:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:58.958 [2024-07-12 11:47:44.981890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:58.958 [2024-07-12 11:47:44.981977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226360 ] 00:41:58.958 EAL: No free 2048 kB hugepages reported on node 1 00:41:58.958 [2024-07-12 11:47:45.081518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.958 [2024-07-12 11:47:45.307436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:59.527 [2024-07-12 11:47:45.763804] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:59.527 11:47:45 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:59.527 11:47:45 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:41:59.527 11:47:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:41:59.527 11:47:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:41:59.527 11:47:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.786 11:47:46 
keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:41:59.786 11:47:46 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:41:59.786 11:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:59.786 11:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:59.786 11:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:59.786 11:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.786 11:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.045 11:47:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:00.045 11:47:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:42:00.045 11:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:00.045 11:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:00.045 11:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:00.045 11:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:00.045 11:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:42:00.304 11:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.thfsWnoC3F 
/tmp/tmp.XcaHthRYVn 00:42:00.304 11:47:46 keyring_file -- keyring/file.sh@20 -- # killprocess 1226360 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1226360 ']' 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1226360 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226360 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226360' 00:42:00.304 killing process with pid 1226360 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@967 -- # kill 1226360 00:42:00.304 Received shutdown signal, test time was about 1.000000 seconds 00:42:00.304 00:42:00.304 Latency(us) 00:42:00.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:00.304 =================================================================================================================== 00:42:00.304 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:00.304 11:47:46 keyring_file -- common/autotest_common.sh@972 -- # wait 1226360 00:42:01.682 11:47:47 keyring_file -- keyring/file.sh@21 -- # killprocess 1224467 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1224467 ']' 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1224467 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@953 -- # uname 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 1224467 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224467' 00:42:01.682 killing process with pid 1224467 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@967 -- # kill 1224467 00:42:01.682 [2024-07-12 11:47:47.722815] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:42:01.682 11:47:47 keyring_file -- common/autotest_common.sh@972 -- # wait 1224467 00:42:04.218 00:42:04.218 real 0m16.459s 00:42:04.218 user 0m34.990s 00:42:04.218 sys 0m2.931s 00:42:04.218 11:47:50 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.218 11:47:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:04.218 ************************************ 00:42:04.218 END TEST keyring_file 00:42:04.218 ************************************ 00:42:04.218 11:47:50 -- common/autotest_common.sh@1142 -- # return 0 00:42:04.218 11:47:50 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:42:04.218 11:47:50 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:04.218 11:47:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.218 11:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.218 11:47:50 -- common/autotest_common.sh@10 -- # set +x 00:42:04.218 ************************************ 00:42:04.218 START TEST keyring_linux 00:42:04.218 ************************************ 00:42:04.218 11:47:50 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:04.218 * Looking for test storage... 
00:42:04.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:04.218 11:47:50 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:04.218 11:47:50 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:04.218 11:47:50 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.218 11:47:50 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:04.218 11:47:50 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.219 11:47:50 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.219 11:47:50 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.219 11:47:50 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.219 11:47:50 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.219 11:47:50 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.219 11:47:50 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:04.219 11:47:50 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:04.219 11:47:50 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:04.219 /tmp/:spdk-test:key0 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:42:04.219 11:47:50 keyring_linux -- nvmf/common.sh@705 -- # python - 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:04.219 11:47:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:04.219 /tmp/:spdk-test:key1 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1227364 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1227364 00:42:04.219 11:47:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1227364 ']' 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:04.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:04.219 11:47:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:04.219 [2024-07-12 11:47:50.522509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:42:04.219 [2024-07-12 11:47:50.522603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227364 ] 00:42:04.478 EAL: No free 2048 kB hugepages reported on node 1 00:42:04.478 [2024-07-12 11:47:50.627716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:04.478 [2024-07-12 11:47:50.830962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.413 11:47:51 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:05.413 11:47:51 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:05.413 11:47:51 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:05.413 11:47:51 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.413 11:47:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:05.413 [2024-07-12 11:47:51.730142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:05.413 null0 00:42:05.413 [2024-07-12 11:47:51.762167] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:05.413 [2024-07-12 11:47:51.762546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.672 11:47:51 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:05.672 321621491 00:42:05.672 11:47:51 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:05.672 952453753 00:42:05.672 11:47:51 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1227595 00:42:05.672 11:47:51 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1227595 
/var/tmp/bperf.sock 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1227595 ']' 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:05.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:05.672 11:47:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:05.672 11:47:51 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:05.672 [2024-07-12 11:47:51.857714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:42:05.672 [2024-07-12 11:47:51.857802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227595 ] 00:42:05.672 EAL: No free 2048 kB hugepages reported on node 1 00:42:05.672 [2024-07-12 11:47:51.959478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:05.930 [2024-07-12 11:47:52.181345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:06.530 11:47:52 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:06.530 11:47:52 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:42:06.530 11:47:52 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:06.530 11:47:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:06.530 11:47:52 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:06.530 11:47:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:07.101 11:47:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:07.101 11:47:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:07.358 [2024-07-12 11:47:53.481934] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:07.358 
nvme0n1 00:42:07.358 11:47:53 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:07.358 11:47:53 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:07.358 11:47:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:07.358 11:47:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:07.358 11:47:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:07.358 11:47:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:07.615 11:47:53 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:07.615 11:47:53 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:07.615 11:47:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@25 -- # sn=321621491 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@26 -- # [[ 321621491 == \3\2\1\6\2\1\4\9\1 ]] 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 321621491 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:07.615 11:47:53 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:07.873 Running I/O for 1 seconds... 00:42:08.808 00:42:08.808 Latency(us) 00:42:08.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:08.808 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:08.808 nvme0n1 : 1.01 15294.55 59.74 0.00 0.00 8329.10 7066.49 18919.96 00:42:08.808 =================================================================================================================== 00:42:08.808 Total : 15294.55 59.74 0.00 0.00 8329.10 7066.49 18919.96 00:42:08.808 0 00:42:08.808 11:47:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:08.808 11:47:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:09.066 11:47:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:09.066 11:47:55 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:09.066 11:47:55 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:09.066 11:47:55 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:09.066 11:47:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:09.326 [2024-07-12 11:47:55.555183] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:09.326 [2024-07-12 11:47:55.555345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331d80 (107): Transport endpoint is not connected 00:42:09.326 [2024-07-12 11:47:55.556329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x615000331d80 (9): Bad file descriptor 00:42:09.326 [2024-07-12 11:47:55.557325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:09.326 [2024-07-12 11:47:55.557353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:09.326 [2024-07-12 11:47:55.557364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:09.326 request: 00:42:09.326 { 00:42:09.326 "name": "nvme0", 00:42:09.326 "trtype": "tcp", 00:42:09.326 "traddr": "127.0.0.1", 00:42:09.326 "adrfam": "ipv4", 00:42:09.326 "trsvcid": "4420", 00:42:09.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:09.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:09.326 "prchk_reftag": false, 00:42:09.326 "prchk_guard": false, 00:42:09.326 "hdgst": false, 00:42:09.326 "ddgst": false, 00:42:09.326 "psk": ":spdk-test:key1", 00:42:09.326 "method": "bdev_nvme_attach_controller", 00:42:09.326 "req_id": 1 00:42:09.326 } 00:42:09.326 Got JSON-RPC error response 00:42:09.326 response: 00:42:09.326 { 00:42:09.326 "code": -5, 00:42:09.326 "message": "Input/output error" 00:42:09.326 } 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@16 -- # 
keyctl search @s user :spdk-test:key0 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@33 -- # sn=321621491 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 321621491 00:42:09.326 1 links removed 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@33 -- # sn=952453753 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 952453753 00:42:09.326 1 links removed 00:42:09.326 11:47:55 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1227595 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1227595 ']' 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1227595 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227595 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227595' 00:42:09.326 killing process with pid 1227595 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@967 -- # kill 1227595 00:42:09.326 Received shutdown signal, test time was about 1.000000 seconds 00:42:09.326 00:42:09.326 Latency(us) 00:42:09.326 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:09.326 =================================================================================================================== 00:42:09.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:09.326 11:47:55 keyring_linux -- common/autotest_common.sh@972 -- # wait 1227595 00:42:10.705 11:47:56 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1227364 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1227364 ']' 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1227364 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227364 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227364' 00:42:10.705 killing process with pid 1227364 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@967 -- # kill 1227364 00:42:10.705 11:47:56 keyring_linux -- common/autotest_common.sh@972 -- # wait 1227364 00:42:13.236 00:42:13.236 real 0m8.929s 00:42:13.236 user 0m14.249s 00:42:13.236 sys 0m1.612s 00:42:13.236 11:47:59 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:13.236 11:47:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:13.236 ************************************ 00:42:13.236 END TEST keyring_linux 00:42:13.236 ************************************ 00:42:13.236 11:47:59 -- common/autotest_common.sh@1142 -- # return 0 00:42:13.236 11:47:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:13.236 11:47:59 -- spdk/autotest.sh@312 
-- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:42:13.236 11:47:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:42:13.237 11:47:59 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:42:13.237 11:47:59 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:42:13.237 11:47:59 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:42:13.237 11:47:59 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:42:13.237 11:47:59 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:42:13.237 11:47:59 -- common/autotest_common.sh@722 -- # xtrace_disable
00:42:13.237 11:47:59 -- common/autotest_common.sh@10 -- # set +x
00:42:13.237 11:47:59 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:42:13.237 11:47:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:42:13.237 11:47:59 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:42:13.237 11:47:59 -- common/autotest_common.sh@10 -- # set +x
00:42:17.425 INFO: APP EXITING
00:42:17.425 INFO: killing all VMs
00:42:17.425 INFO: killing vhost app
00:42:17.425 INFO: EXIT DONE
00:42:19.961 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:42:19.961 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:42:19.961 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:42:22.496 Cleaning
00:42:22.496 Removing: /var/run/dpdk/spdk0/config
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:42:22.496 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:42:22.496 Removing: /var/run/dpdk/spdk0/hugepage_info
00:42:22.496 Removing: /var/run/dpdk/spdk1/config
00:42:22.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:42:22.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:42:22.497 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:42:22.497 Removing: /var/run/dpdk/spdk1/hugepage_info
00:42:22.497 Removing: /var/run/dpdk/spdk1/mp_socket
00:42:22.497 Removing: /var/run/dpdk/spdk2/config
00:42:22.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:42:22.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:42:22.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:42:22.756 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:42:22.756 Removing: /var/run/dpdk/spdk2/hugepage_info
00:42:22.756 Removing: /var/run/dpdk/spdk3/config
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:42:22.756 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:42:22.756 Removing: /var/run/dpdk/spdk3/hugepage_info
00:42:22.756 Removing: /var/run/dpdk/spdk4/config
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:42:22.756 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:42:22.757 Removing: /var/run/dpdk/spdk4/hugepage_info
00:42:22.757 Removing: /dev/shm/bdev_svc_trace.1
00:42:22.757 Removing: /dev/shm/nvmf_trace.0
00:42:22.757 Removing: /dev/shm/spdk_tgt_trace.pid725447
00:42:22.757 Removing: /var/run/dpdk/spdk0
00:42:22.757 Removing: /var/run/dpdk/spdk1
00:42:22.757 Removing: /var/run/dpdk/spdk2
00:42:22.757 Removing: /var/run/dpdk/spdk3
00:42:22.757 Removing: /var/run/dpdk/spdk4
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1003092
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1008906
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1013919
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1051211
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1055470
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1061826
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1064412
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1066645
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1071611
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1076086
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1083747
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1083896
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1088625
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1088858
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1089092
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1089553
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1089562
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1090961
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1092724
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1094370
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1095970
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1097576
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1099214
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1105816
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1106524
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1108276
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1109314
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1115248
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1118224
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1123846
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1129401
00:42:22.757 Removing: /var/run/dpdk/spdk_pid1138179
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1145627
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1145700
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1163987
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1164797
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1165611
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1166405
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1167741
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1168445
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1169315
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1170071
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1174542
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1175009
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1181292
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1181568
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1184013
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1192568
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1192713
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1197807
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1199926
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1202100
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1203372
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1205561
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1206841
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1216017
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1216477
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1216975
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1219748
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1220318
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1220782
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1224467
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1224638
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1226360
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1227364
00:42:23.016 Removing: /var/run/dpdk/spdk_pid1227595
00:42:23.016 Removing: /var/run/dpdk/spdk_pid721474
00:42:23.016 Removing: /var/run/dpdk/spdk_pid723001
00:42:23.016 Removing: /var/run/dpdk/spdk_pid725447
00:42:23.016 Removing: /var/run/dpdk/spdk_pid726517
00:42:23.016 Removing: /var/run/dpdk/spdk_pid727723
00:42:23.016 Removing: /var/run/dpdk/spdk_pid728422
00:42:23.016 Removing: /var/run/dpdk/spdk_pid729846
00:42:23.016 Removing: /var/run/dpdk/spdk_pid730082
00:42:23.016 Removing: /var/run/dpdk/spdk_pid730694
00:42:23.016 Removing: /var/run/dpdk/spdk_pid732430
00:42:23.016 Removing: /var/run/dpdk/spdk_pid733999
00:42:23.016 Removing: /var/run/dpdk/spdk_pid734874
00:42:23.016 Removing: /var/run/dpdk/spdk_pid735619
00:42:23.016 Removing: /var/run/dpdk/spdk_pid736382
00:42:23.016 Removing: /var/run/dpdk/spdk_pid737128
00:42:23.016 Removing: /var/run/dpdk/spdk_pid737395
00:42:23.016 Removing: /var/run/dpdk/spdk_pid737863
00:42:23.016 Removing: /var/run/dpdk/spdk_pid738148
00:42:23.016 Removing: /var/run/dpdk/spdk_pid739247
00:42:23.016 Removing: /var/run/dpdk/spdk_pid743066
00:42:23.016 Removing: /var/run/dpdk/spdk_pid743798
00:42:23.016 Removing: /var/run/dpdk/spdk_pid744533
00:42:23.016 Removing: /var/run/dpdk/spdk_pid744765
00:42:23.016 Removing: /var/run/dpdk/spdk_pid746423
00:42:23.016 Removing: /var/run/dpdk/spdk_pid746659
00:42:23.016 Removing: /var/run/dpdk/spdk_pid748530
00:42:23.016 Removing: /var/run/dpdk/spdk_pid748704
00:42:23.016 Removing: /var/run/dpdk/spdk_pid749260
00:42:23.016 Removing: /var/run/dpdk/spdk_pid749500
00:42:23.016 Removing: /var/run/dpdk/spdk_pid750150
00:42:23.016 Removing: /var/run/dpdk/spdk_pid750349
00:42:23.016 Removing: /var/run/dpdk/spdk_pid751925
00:42:23.016 Removing: /var/run/dpdk/spdk_pid752183
00:42:23.016 Removing: /var/run/dpdk/spdk_pid752506
00:42:23.016 Removing: /var/run/dpdk/spdk_pid753206
00:42:23.016 Removing: /var/run/dpdk/spdk_pid753583
00:42:23.275 Removing: /var/run/dpdk/spdk_pid753974
00:42:23.275 Removing: /var/run/dpdk/spdk_pid754447
00:42:23.275 Removing: /var/run/dpdk/spdk_pid754929
00:42:23.275 Removing: /var/run/dpdk/spdk_pid755402
00:42:23.275 Removing: /var/run/dpdk/spdk_pid755798
00:42:23.275 Removing: /var/run/dpdk/spdk_pid756165
00:42:23.275 Removing: /var/run/dpdk/spdk_pid756623
00:42:23.275 Removing: /var/run/dpdk/spdk_pid757105
00:42:23.275 Removing: /var/run/dpdk/spdk_pid757578
00:42:23.275 Removing: /var/run/dpdk/spdk_pid758060
00:42:23.275 Removing: /var/run/dpdk/spdk_pid758534
00:42:23.275 Removing: /var/run/dpdk/spdk_pid759016
00:42:23.275 Removing: /var/run/dpdk/spdk_pid759487
00:42:23.275 Removing: /var/run/dpdk/spdk_pid759966
00:42:23.275 Removing: /var/run/dpdk/spdk_pid760361
00:42:23.275 Removing: /var/run/dpdk/spdk_pid760742
00:42:23.275 Removing: /var/run/dpdk/spdk_pid761201
00:42:23.275 Removing: /var/run/dpdk/spdk_pid761677
00:42:23.275 Removing: /var/run/dpdk/spdk_pid762162
00:42:23.275 Removing: /var/run/dpdk/spdk_pid762641
00:42:23.275 Removing: /var/run/dpdk/spdk_pid763118
00:42:23.275 Removing: /var/run/dpdk/spdk_pid763636
00:42:23.275 Removing: /var/run/dpdk/spdk_pid764400
00:42:23.275 Removing: /var/run/dpdk/spdk_pid768503
00:42:23.275 Removing: /var/run/dpdk/spdk_pid852535
00:42:23.275 Removing: /var/run/dpdk/spdk_pid857449
00:42:23.275 Removing: /var/run/dpdk/spdk_pid867570
00:42:23.275 Removing: /var/run/dpdk/spdk_pid873113
00:42:23.275 Removing: /var/run/dpdk/spdk_pid877158
00:42:23.275 Removing: /var/run/dpdk/spdk_pid877823
00:42:23.275 Removing: /var/run/dpdk/spdk_pid884061
00:42:23.275 Removing: /var/run/dpdk/spdk_pid893573
00:42:23.276 Removing: /var/run/dpdk/spdk_pid893967
00:42:23.276 Removing: /var/run/dpdk/spdk_pid898665
00:42:23.276 Removing: /var/run/dpdk/spdk_pid905167
00:42:23.276 Removing: /var/run/dpdk/spdk_pid907885
00:42:23.276 Removing: /var/run/dpdk/spdk_pid918857
00:42:23.276 Removing: /var/run/dpdk/spdk_pid928053
00:42:23.276 Removing: /var/run/dpdk/spdk_pid929905
00:42:23.276 Removing: /var/run/dpdk/spdk_pid931044
00:42:23.276 Removing: /var/run/dpdk/spdk_pid948564
00:42:23.276 Removing: /var/run/dpdk/spdk_pid953282
00:42:23.276 Removing: /var/run/dpdk/spdk_pid978855
00:42:23.276 Removing: /var/run/dpdk/spdk_pid983577
00:42:23.276 Removing: /var/run/dpdk/spdk_pid985178
00:42:23.276 Removing: /var/run/dpdk/spdk_pid987361
00:42:23.276 Removing: /var/run/dpdk/spdk_pid987954
00:42:23.276 Removing: /var/run/dpdk/spdk_pid988457
00:42:23.276 Removing: /var/run/dpdk/spdk_pid988706
00:42:23.276 Removing: /var/run/dpdk/spdk_pid989674
00:42:23.276 Removing: /var/run/dpdk/spdk_pid991730
00:42:23.276 Removing: /var/run/dpdk/spdk_pid993185
00:42:23.276 Removing: /var/run/dpdk/spdk_pid994134
00:42:23.276 Removing: /var/run/dpdk/spdk_pid996691
00:42:23.276 Removing: /var/run/dpdk/spdk_pid997642
00:42:23.276 Removing: /var/run/dpdk/spdk_pid998697
00:42:23.276 Clean
00:42:23.534 11:48:09 -- common/autotest_common.sh@1451 -- # return 0
00:42:23.534 11:48:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:42:23.534 11:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable
00:42:23.534 11:48:09 -- common/autotest_common.sh@10 -- # set +x
00:42:23.534 11:48:09 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:42:23.534 11:48:09 -- common/autotest_common.sh@728 -- # xtrace_disable
00:42:23.534 11:48:09 -- common/autotest_common.sh@10 -- # set +x
00:42:23.534 11:48:09 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:23.534 11:48:09 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:42:23.534 11:48:09 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:42:23.534 11:48:09 -- spdk/autotest.sh@391 -- # hash lcov
00:42:23.534 11:48:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:42:23.534 11:48:09 -- spdk/autotest.sh@393 -- # hostname
00:42:23.534 11:48:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:42:23.534 geninfo: WARNING: invalid characters removed from testname!
00:42:45.464 11:48:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:45.464 11:48:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:46.876 11:48:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:48.781 11:48:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:50.160 11:48:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:52.065 11:48:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:42:53.970 11:48:39 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:42:53.970 11:48:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:42:53.970 11:48:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:42:53.970 11:48:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:42:53.970 11:48:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:42:53.970 11:48:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:53.970 11:48:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:53.970 11:48:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:53.971 11:48:39 -- paths/export.sh@5 -- $ export PATH
00:42:53.971 11:48:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:53.971 11:48:39 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:42:53.971 11:48:39 -- common/autobuild_common.sh@444 -- $ date +%s
00:42:53.971 11:48:39 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720777719.XXXXXX
00:42:53.971 11:48:39 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720777719.zruv8W
00:42:53.971 11:48:39 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:42:53.971 11:48:39 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:42:53.971 11:48:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:42:53.971 11:48:39 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:42:53.971 11:48:39 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:42:53.971 11:48:39 -- common/autobuild_common.sh@460 -- $ get_config_params
00:42:53.971 11:48:39 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:42:53.971 11:48:39 -- common/autotest_common.sh@10 -- $ set +x
00:42:53.971 11:48:39 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:42:53.971 11:48:39 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:42:53.971 11:48:39 -- pm/common@17 -- $ local monitor
00:42:53.971 11:48:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:53.971 11:48:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:53.971 11:48:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:53.971 11:48:39 -- pm/common@21 -- $ date +%s
00:42:53.971 11:48:39 -- pm/common@21 -- $ date +%s
00:42:53.971 11:48:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:53.971 11:48:39 -- pm/common@25 -- $ sleep 1
00:42:53.971 11:48:39 -- pm/common@21 -- $ date +%s
00:42:53.971 11:48:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720777719
00:42:53.971 11:48:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720777719
00:42:53.971 11:48:39 -- pm/common@21 -- $ date +%s
00:42:53.971 11:48:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720777719
00:42:53.971 11:48:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720777719
00:42:53.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720777719_collect-vmstat.pm.log
00:42:53.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720777719_collect-cpu-load.pm.log
00:42:53.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720777719_collect-cpu-temp.pm.log
00:42:53.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720777719_collect-bmc-pm.bmc.pm.log
00:42:54.910 11:48:40 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:42:54.910 11:48:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:42:54.910 11:48:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:54.910 11:48:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:42:54.910 11:48:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:42:54.910 11:48:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:42:54.910 11:48:40 -- spdk/autopackage.sh@19 -- $ timing_finish
00:42:54.910 11:48:40 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:54.910 11:48:40 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:42:54.910 11:48:40 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:54.910 11:48:41 -- spdk/autopackage.sh@20 -- $ exit 0
00:42:54.910 11:48:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:42:54.910 11:48:41 -- pm/common@29 -- $ signal_monitor_resources TERM
00:42:54.910 11:48:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:42:54.910 11:48:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:54.910 11:48:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:42:54.910 11:48:41 -- pm/common@44 -- $ pid=1239705
00:42:54.910 11:48:41 -- pm/common@50 -- $ kill -TERM 1239705
00:42:54.910 11:48:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:54.910 11:48:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:42:54.910 11:48:41 -- pm/common@44 -- $ pid=1239706
00:42:54.910 11:48:41 -- pm/common@50 -- $ kill -TERM 1239706
00:42:54.910 11:48:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:54.910 11:48:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:42:54.910 11:48:41 -- pm/common@44 -- $ pid=1239709
00:42:54.910 11:48:41 -- pm/common@50 -- $ kill -TERM 1239709
00:42:54.910 11:48:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:54.910 11:48:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:42:54.910 11:48:41 -- pm/common@44 -- $ pid=1239736
00:42:54.910 11:48:41 -- pm/common@50 -- $ sudo -E kill -TERM 1239736
00:42:54.910 + [[ -n 617701 ]]
00:42:54.910 + sudo kill 617701
00:42:54.922 [Pipeline] }
00:42:54.940 [Pipeline] // stage
00:42:54.945 [Pipeline] }
00:42:54.963 [Pipeline] // timeout
00:42:54.968 [Pipeline] }
00:42:54.988 [Pipeline] // catchError
00:42:54.994 [Pipeline] }
00:42:55.015 [Pipeline] // wrap
00:42:55.023 [Pipeline] }
00:42:55.039 [Pipeline] // catchError
00:42:55.049 [Pipeline] stage
00:42:55.052 [Pipeline] { (Epilogue)
00:42:55.070 [Pipeline] catchError
00:42:55.072 [Pipeline] {
00:42:55.086 [Pipeline] echo
00:42:55.088 Cleanup processes
00:42:55.094 [Pipeline] sh
00:42:55.381 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:55.381 1239830 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:42:55.381 1240105 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:55.398 [Pipeline] sh
00:42:55.689 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:42:55.689 ++ grep -v 'sudo pgrep'
00:42:55.689 ++ awk '{print $1}'
00:42:55.689 + sudo kill -9 1239830
00:42:55.703 [Pipeline] sh
00:42:55.996 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:05.986 [Pipeline] sh
00:43:06.271 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:06.271 Artifacts sizes are good
00:43:06.286 [Pipeline] archiveArtifacts
00:43:06.293 Archiving artifacts
00:43:06.529 [Pipeline] sh
00:43:06.841 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:06.852 [Pipeline] cleanWs
00:43:06.860 [WS-CLEANUP] Deleting project workspace...
00:43:06.860 [WS-CLEANUP] Deferred wipeout is used...
00:43:06.865 [WS-CLEANUP] done
00:43:06.866 [Pipeline] }
00:43:06.883 [Pipeline] // catchError
00:43:06.893 [Pipeline] sh
00:43:07.181 + logger -p user.info -t JENKINS-CI
00:43:07.189 [Pipeline] }
00:43:07.203 [Pipeline] // stage
00:43:07.208 [Pipeline] }
00:43:07.223 [Pipeline] // node
00:43:07.227 [Pipeline] End of Pipeline
00:43:07.256 Finished: SUCCESS